title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning | Accept (poster) | Summary: This paper introduces DETAIL, a novel technique for attributing and interpreting in-context learning (ICL) demonstrations in transformer-based language models. The authors propose an adaptation of the influence function, typically used in conventional machine learning, to address the unique characteristics of ICL. DETAIL treats transformers as implementing an internal kernelized ridge regression, allowing for efficient and effective attribution of demonstrations. The method is evaluated on various tasks, including demonstration perturbation, noisy demonstration detection, and real-world applications such as demonstration reordering and curation. The authors demonstrate DETAIL's superiority over existing attribution methods in terms of both performance and computational efficiency.
Strengths: 1. This paper proposes a novel approach, i.e., DETAIL to address the specific challenges of ICL attribution by leveraging the internal optimizer perspective of transformers.
2. The method incorporates random matrix projection to reduce dimensionality, resulting in significant speedups (up to 10x) while maintaining effectiveness.
3. DETAIL is shown to be effective across multiple tasks, including noisy demonstration detection, demonstration reordering, and curation, demonstrating its broad applicability.
4. The paper provides extensive experiments on both custom transformers and large language models (LLMs) like Vicuna-7b and Llama-2-13b, validating the method's effectiveness.
5. The authors demonstrate that DETAIL scores computed on white-box models can transfer to black-box models like GPT-3.5, enhancing its practical applicability.
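The random-matrix projection credited in point 2 is a standard Johnson–Lindenstrauss-style dimensionality reduction; a minimal NumPy sketch (a hypothetical illustration of the general technique, not the authors' implementation):

```python
import numpy as np

def random_project(embeddings, d, seed=0):
    """Project D-dimensional embeddings down to d dimensions with a
    Gaussian random matrix; scaling by 1/sqrt(d) approximately
    preserves pairwise distances (Johnson-Lindenstrauss)."""
    rng = np.random.default_rng(seed)
    D = embeddings.shape[1]
    P = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))
    return embeddings @ P

X = np.ones((10, 4096))      # e.g. 10 demonstration embeddings
Z = random_project(X, d=50)
print(Z.shape)               # (10, 50)
```

Downstream computations on the projected `d`-dimensional vectors are correspondingly cheaper, which is the source of the claimed speedup.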
Weaknesses: 1. Section 5.1 presents an evaluation on a custom transformer using the MNIST dataset. While this provides an initial demonstration of DETAIL's capabilities, the paper doesn't clearly justify why this evaluation is necessary given the subsequent experiments on large language models. It's not immediately apparent how insights from this simplified setting transfer to more complex LLMs, potentially making this section feel disconnected from the main contributions of the paper.
2. Figure 3 shows that even the Llama-2-13b model achieves only 60-70% accuracy on the AG News dataset without any perturbation. This is significantly lower than the typical 85-96% performance range reported in the literature for this dataset. Could the authors provide more insight into the performance of in-context learning on the AG News dataset?
Technical Quality: 3
Clarity: 3
Questions for Authors: see the weakness section
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to the review of reviewer smfw
Dear reviewer smfw,
We would like to thank you for the review and for highlighting the novelty of our approach.
We would like to address your concern as follows.
### 1. Connection between the MNIST experiment and the LLM experiment
We wish to clarify that we provide the MNIST experiment mainly as a visualization to show how DETAIL works as we believe it is more intuitive to understand visual similarity than semantic similarity of text. We also wish to use the MNIST experiment to highlight that our approach applies to ICL with transformers in general although we focus on transformer-based language models. We appreciate your feedback on the disconnection between the two parts. We will make the transition smoother by highlighting the above clarification in our revision.
### 2. Accuracy on AG News
We believe two main factors are causing the gap in test accuracy. First, we use only $10$ demonstrations for ICL while AG News typically requires more than $30$ demonstrations to reach SOTA performance (e.g., Figure 3 (c) of [1]). Second, we modify the label names in our experiments to make sure they do not carry semantic meaning, to enforce ICL behavior (lines 171-176). The original label names (World/Sports/Business/SciTech) make it easier for the language model to understand the task, leading to higher accuracy. The performance with altered labels can also be observed in Figure 3 (c) of [1] (plotted line with legend $\tau_{\text{RA}}-ICL$) and is similar to the accuracy reported in our work.
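To make the label-alteration point concrete, here is a hedged sketch of the general idea; the abstract tokens 'A'–'D' and the helper function are illustrative stand-ins, not the exact names or prompt format used in the paper:

```python
# AG News original label names carry semantic meaning; replacing them
# with abstract tokens forces the model to infer the label mapping
# from the in-context demonstrations themselves.
semantic = {0: "World", 1: "Sports", 2: "Business", 3: "SciTech"}
abstract = {0: "A", 1: "B", 2: "C", 3: "D"}

def format_demo(text, label, label_map):
    """Format one ICL demonstration with the chosen label names."""
    return f"Input: {text}\nLabel: {label_map[label]}"

print(format_demo("Stocks rallied on Friday.", 2, abstract))
# Input: Stocks rallied on Friday.
# Label: C
```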
Once again, we would like to thank you for taking the time to review. We hope that our response sufficiently addresses your concerns.
[1] Jiaoda Li et al., "What Do Language Models Learn in Context? The Structured Task Hypothesis", ACL 2024. | Summary: This paper proposes a new method to estimate the influence of ICL examples on a query. This influence estimation can help improve ICL, for example, by reordering and curating ICL examples.
Strengths: - The motivation is clear, and the paper overall is well-written.
- The idea to estimate the influence of ICL examples is interesting, and can help better ICL learning for LMs.
- Results in Table 2 seem to show that DETAIL performs better than existing influence functions (token-level) and also some recent work that estimates ICL influences.
Weaknesses: - More comprehensive experiments: for applying DETAIL, the authors only showed a few tasks (all classification) and on one major public model (Vicuna-7b). To show the effectiveness of DETAIL, more comprehensive experiment results should be provided: e.g., Table 1 and 3 should use more tasks other than classification, and more models (can the authors add results on Llama-2-13b?).
- DETAIL for ICL order optimization: based on the description in Section 5.3, it seems like the authors re-ordered the examples not just based on I_{self}, but also based on the trend plot in Figure 5. Does test accuracy w.r.t. perturbed position always need to be computed first before deciding on the orders? This seems really expensive, and the choice of putting "two demonstrations" in the front also seems very arbitrary. How would one use such a method for a random new task?
- The need to use white-box models and poor transfer results: DETAIL requires white-box model access, and the transfer result in Table 3 is not significant on the more realistic setting (no corruption). Also, can the authors show the transfer results with more models, e.g., LLama-2 to GPT-3.5, and Vicuna-7b to GPT-4?
Technical Quality: 3
Clarity: 2
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to the review of reviewer oPDt
Dear reviewer oPDt,
Thank you for the review and for acknowledging the motivation and main idea of our work.
We would like to address your concerns in the following. Kindly note that all citation numbers follow the reference list from the main paper.
### 1. Comprehensiveness of experiments
We wish to first highlight that our experiments serve mainly to demonstrate the interpretability of DETAIL's attribution and the applicability of DETAIL in various scenarios, including demonstration curation and re-ordering. For models, in addition to Vicuna-7b, we have also considered the Llama-2-7b and Llama-2-13b models in the paper, as well as an SSM-architecture model, Mamba-2.8B, with some of the results in the Appendix.
We acknowledge that while DETAIL itself can be useful in many tasks, our experiments currently consider only classification tasks. To extend our work to support other types of tasks, we may modify the loss function accordingly. For example, to support generation tasks where the ground truth is a sequence of desired outputs, we may modify the loss in Eq. (3) to be an autoregressive cross-entropy loss instead of an MSE loss.
We thank you for suggesting including experiments for Table 1 and Table 3 using Llama-2-13b. Due to the character limit, we include the additional experimental results, together with the transfer results, in the PDF in the global rebuttal. We provide an analysis of the additional results in bullet point 4 below.
### 2. Placement of demonstrations in the re-ordering task
We would like to clarify that Figure 5 serves to verify how test accuracy changes with the position of the corrupted demonstration. Figure 5 shows that placing corrupted demonstrations at the two ends generally leads to better test accuracy across various datasets (e.g., Subj, SST-2, and Rotten Tomatoes). Hence, we use the DETAIL score as a proxy for the corruptedness of a demonstration and place a few demonstrations with the highest DETAIL scores in front. We expect the same trend to hold on a random new task, where we can use the same technique (placing a few demonstrations with the highest DETAIL scores in front). The additional experiments in bullet point 4 are all conducted by placing the 2 demonstrations with the highest scores in front.
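A minimal sketch of the described heuristic (move the k demonstrations with the highest DETAIL score, i.e., those most likely corrupted, to the front); the scores below are hypothetical, not values produced by DETAIL:

```python
def reorder_by_score(demos, scores, k=2):
    """Move the k demonstrations with the highest attribution score
    (most likely corrupted) to the front, keeping the rest in their
    original relative order."""
    ranked = sorted(range(len(demos)), key=lambda i: scores[i], reverse=True)
    front = set(ranked[:k])
    return [demos[i] for i in ranked[:k]] + \
           [d for i, d in enumerate(demos) if i not in front]

demos = ["d0", "d1", "d2", "d3", "d4"]
scores = [0.1, 0.9, 0.2, 0.8, 0.3]      # hypothetical DETAIL scores
print(reorder_by_score(demos, scores))  # ['d1', 'd3', 'd0', 'd2', 'd4']
```

This requires only the scores (one forward pass), consistent with the rebuttal's claim that re-ordering adds little extra computation.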
### 3. White-box access and Transfer performance on re-ordering task
We acknowledge that DETAIL specifically considers the white-box setting to provide interpretable attribution. While methods under a black-box setting (e.g., [41, 46]) may offer wider applicability, they may also have limited interpretability and poorer performance due to the absence of access to the model's internal working (as shown in Table 2). We highlight that **interpretable attribution of ICL demonstrations is the main focus and contribution of this work**.
We acknowledge that transfer performance is less significant for the re-ordering task in Table 3 when no demonstration is corrupted. We think this is because GPT-3.5/GPT-4o can already achieve very high test accuracy with the baseline ordering (baseline accuracy with no corrupted demo in Table 3 is on average 12.0 percentage points higher than that in Table 1).
However, while reordering demonstrations may not improve performance when there are no corrupted demonstrations, it does not deteriorate performance either, nor does it require heavy extra computation (only 1 pass). As such, reordering offers a way to obtain **"free" potential performance gain without the costly curation or obtaining more demonstrations**.
On your comment that it is less "realistic" to have corrupted demonstrations in the ICL dataset, we argue that although a carefully curated ICL dataset contains few corrupted demonstrations, in various real-world scenarios ICL demonstrations might not come from a well-prepared dataset but are instead generated in real time. For example, LLM-powered search engines (e.g., Perplexity.AI) obtain ICL demonstrations in real time from web-scraped data based on the user query, which very often contains corrupted demonstrations. Recent works that consider self-generating ICL demonstrations from an LLM (e.g., Chen et al., "SELF-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations", EMNLP 2023) are also subject to corrupted demonstrations due to LLM hallucination. More importantly, as ICL demonstrations are typically generated in real time in these scenarios, costly curation is intractable, and thus the proposed DETAIL-based re-ordering technique, which can indeed be computed in real time, can be highly effective.
### 4. Analysis of additional results on re-ordering task
We kindly refer you to the PDF attached in the global author rebuttal for the additional experimental results.
For Llama-2-13b, performance improvement with no corrupted demonstration is less significant compared to Vicuna-7b, although with 3 corrupted demonstrations, we can still observe 1-2% improvement. We attribute this to the stronger baseline performance (baseline (no corrupted) average accuracy on Llama-2-13b is 16.4 percentage points higher than that on Vicuna-7b).
For transfer performance, we observe that the performance gain is not significant with no corrupted demonstrations. With 3 corrupted demonstrations, GPT-4o shows limited performance gain, which is probably because GPT-4o has stronger pretrained prior and can achieve very high test accuracy even with some bad demonstrations. However, for GPT-3.5, the performance improves by more than 3% in this scenario. On the other hand, when 6 demonstrations are corrupted, re-ordering shows a significant improvement of over 3% on GPT-4o as well. This demonstrates the benefit of re-ordering, even for large models, with noisy demonstration data.
Thank you for providing a constructive review and the questions. We hope our response has helped address your concerns. We are glad to provide further clarifications.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for the rebuttal and adding the new results. Given all the results so far, the overall trend becomes a bit more clear:
(1) the method gives more significant improvement under the corruption setting (which is less realistic, and given the authors' argument on similarity to search retrieval, it would be more convincing to provide such results, even with a simple RAG setup);
(2) the method does not yield significant gains when the baseline performance is already high (for GPT-4o it's ok, but for GPT-3.5 the gains still vary quite a bit; e.g., there's no gain on "Subj" under the no-corruption setting). It would be more convincing if the authors could show more consistent results across models/settings, and provide results on more challenging tasks where the baseline performance is not good enough, to see if DETAIL can still yield gains.
I will maintain my score given the limited effectiveness of the proposed method, but I won't object to accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thank you for acknowledging our response
Comment: Thank you very much for acknowledging our response and suggesting a new experiment setting with RAG for search retrieval. We will include your suggestions, along with our clarifications and additional experimental results, in the revision of our work. | Summary: The paper introduces DETAIL, a novel influence-function based attribution technique to estimate the influence of each example in the demonstration sequence for the given target query for in-context learning (ICL). The authors empirically validate DETAIL’s effectiveness in stylized experiments and LLMs. Additionally, they demonstrate DETAIL’s applicability in real-world tasks like demonstration reordering and curation and the transferability of DETAIL’s attribution scores from white-box to black-box models.
Strengths: 1. Innovative Application of Influence Functions: The application of influence functions to interpret in-context learning (ICL) is both innovative and intriguing, offering a fresh perspective on model interpretability.
2. Transferability to Black-Box Models: The method demonstrates promising results in transferring attribution scores from white-box models to black-box models, which significantly enhances its practical applicability.
3. Promising Performance: The empirical performance of DETAIL is promising, showcasing its potential effectiveness in real-world scenarios
Weaknesses: 1. Strong Assumptions: The work is built on the assumption that transformers implement an internal optimizer. This assumption, while supported by some theoretical proof in stylized settings, may not universally hold. The proof further assumes a learn-to-learn setting for ICL, which differs from the definition provided in lines 88-105 in this paper. Additionally, Equation 3 assumes the loss function of ICL is a ridge regression function without robust theoretical guarantees.
2. Lack of Analysis for Equation 6: Given the strong assumptions underlying the derived Equation 6, its reliability may be questionable.
3. Sensitivity to Demonstration Ordering: Recent work has demonstrated that ICL is not sensitive to ordering with more powerful demonstration retrievers and advanced language models, contradicting the conclusion in line 35.
4. Limited Experimental Comparisons: The experimental validation of DETAIL in ICL demonstration curation could be strengthened. The authors should compare their method with other learning-free demonstration selection methods, such as those based on BERT embeddings.
5. Computational Cost: The computational cost of DETAIL is relatively high for demonstration curation or selection. Existing methods that utilize BERT embeddings are more computationally efficient.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do you obtain $y_{test}$ for $I_{test}$?
2. Have you explored initializing $m(x)$ and $y$ with BERT embeddings? This could be a potential area for improvement, leveraging pre-trained embeddings for better initialization.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to the review of reviewer EuFx
Dear reviewer EuFx,
Thank you for your review, compliment for the innovative application of the influence function in our work, and the acknowledgment of the real-world applications of DETAIL.
We would like to address your concerns as follows. Kindly note that all citation numbers follow the reference list of the main paper.
### 1. Strong assumption
We wish to clarify that while the pioneering work [51] provided the theoretical proof under the learn-to-learn setting, follow-up works have theoretically shown that transformers can implement internal optimizers **even when they are pre-trained on random instances** [2, 52]. [20] further demonstrated the internal optimizer phenomenon w.r.t. GPT.
While we acknowledge that the kernelized ridge regression does not have a formal theoretical guarantee, it should be noted that the formulation closely follows that mentioned in [51] with slight modification, as we apply it with the influence function (lines 128-131). Following the kernelized ridge regression, equation 6 is then rigorously derived by following the exact definition of the influence function.
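For intuition, the influence function for plain ridge regression has a well-known closed form; the sketch below is a textbook-style illustration of self-influence (a standard formulation under the ridge-regression view, not the paper's exact Eq. 6, which operates on the transformer's internal representations):

```python
import numpy as np

def ridge_self_influence(X, y, lam=1.0):
    """Self-influence I_self(i) = g_i^T H^{-1} g_i for ridge regression,
    where H = X^T X + lam*I is the Hessian of the objective and
    g_i = (x_i^T theta - y_i) x_i is the loss gradient at point i."""
    n, d = X.shape
    H = X.T @ X + lam * np.eye(d)
    theta = np.linalg.solve(H, X.T @ y)   # ridge solution
    H_inv = np.linalg.inv(H)
    g = (X @ theta - y)[:, None] * X      # per-point gradients
    return np.einsum("id,de,ie->i", g, H_inv, g)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = X @ np.array([1.0, -2.0, 0.5])
y[0] += 5.0                               # corrupt one label
scores = ridge_self_influence(X, y)       # corrupted points tend to score high
```

The quadratic form is nonnegative because H (hence its inverse) is positive definite, so the scores can be ranked directly, which is how influence-style methods flag suspicious training points.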
### 2. Sensitivity to demonstration ordering
We would appreciate it if you could kindly provide sources that have made this claim so that we could better address this concern.
Assuming that it is the case where very large models are indeed insensitive to the ordering of demonstrations, we highlight that the claim in line 35 is based on previous works considering relatively smaller language models (typically less than 10b) [35, 37].
Smaller language models are also widely used in real-life scenarios because of their cost-effectiveness (a 7b model can be run on a personal PC). In these applications, the models typically suffer from the problem of being sensitive to demonstration ordering.
Additionally, while it is understandable that very large models are more robust to re-ordering, recent work (e.g., Table 12 of Wu et al., arXiv 2405.16122) has shown that there can be up to a 10% accuracy difference when large noise is injected, by carefully optimizing the order of demonstrations on GPT-3.5.
### 3. Curation experiment with BERT embedding
Thank you for suggesting the additional comparison baseline (i.e., curating demonstration with BERT-based methods). We have conducted an additional experiment under the setting of Table 2 using BERT score from a popular sentence-transformer model `sentence-transformers/msmarco-bert-base-dot-v5`. The results are tabulated below:
| Metric | DETAIL ($d=1000$) | DETAIL ($d=50$) | BERT Score |
| --- | --- | --- | --- |
| Subj (Accuracy) | **0.747 (2.60e-02)** | 0.713 (2.43e-02) | 0.671 (2.43e-02) |
| SST2 (Accuracy) | **0.607 (2.12e-02)** | 0.530 (4.09e-02) | 0.477 (2.54e-02) |
| Rotten tomatoes (Accuracy) | 0.555 (1.94e-02) | **0.557 (2.32e-02)** | 0.435 (1.33e-02) |
| AG News (Accuracy) | **0.412 (1.35e-02)** | 0.387 (1.32e-02) | 0.355 (1.58e-02) |
| |
| Subj (Wall time) | 5.22 (1.17e-01) | 3.18 (4.63e-02) | **2.97 (3.05e-02)** |
| SST2 (Wall time) | 4.88 (1.35e-01) | **2.65 (4.09e-02)** | 2.91 (2.78e-02) |
| Rotten tomatoes (Wall time) | 5.11 (1.06e-01) | 2.97 (5.36e-02) | **2.95 (4.43e-02)** |
| AG News (Wall time) | 10.4 (1.07e-01) | 6.17 (7.30e-02) | **3.15 (5.87e-02)** |
While we acknowledge that BERT score is relatively faster than DETAIL ($d=1000$), the speed gain is mainly due to **BERT scores being model-agnostic** and thus likely comes at the cost of poorer performance compared to DETAIL (as reflected by the lower accuracy in the table above). We wish to highlight that one advantage of **DETAIL is that its attribution is dependent on the transformer used** which provides more interpretability and improved performance.
Moreover, while BERT score is fast, we note that DETAIL is also very computationally efficient: DETAIL is much faster than most attribution methods shown in Table 2. DETAIL can be even faster by reducing $d$. As shown in the table, when $d=50$, DETAIL has a comparable running time as BERT score while still achieving higher accuracy.
### Questions
1. Obtaining $y_{\text{test}}$: We wish to clarify that $y_{\text{test}}$ is part of $z_{\text{test}} = (x_{\text{test}}, y_{\text{test}})$ (line 103) which refers to the ground truth label of the query sample. $y_{\text{test}}$ is provided as part of the test dataset.
2. Incorporating BERT embedding in $m(x)$: Thank you for the insightful suggestion to incorporate BERT embeddings in calculating $m(x)$. We believe it is an exciting direction for future research to consider a combination of BERT and transformer embeddings to improve performance. We have conducted a preliminary experiment with three different $m(x)$ computations, tabulated below. For the transformer embedding, we use $d=768$, the same size as the BERT embedding. When using the BERT embedding only, the performance is worse than DETAIL. Interestingly, using an equal-weighted average results in SOTA test accuracy on 3 out of 4 datasets.
| Metric | Bert Embedding Only | Weighted Average (equal weight) | Concatenation |
| --- | --- | --- | --- |
| Subj (Accuracy) | 0.665 (2.36e-02) | **0.758 (1.97e-02)** | 0.719 (2.75e-02) |
| SST2 (Accuracy) | 0.475 (1.31e-02) | 0.579 (2.34e-02) | 0.596 (1.96e-02) |
| Rotten tomatoes (Accuracy) | 0.510 (1.85e-02) | **0.575 (2.46e-02)** | 0.570 (2.75e-02) |
| AG News (Accuracy) | 0.408 (2.10e-02) | **0.425 (1.76e-02)** | 0.412 (1.50e-02) |
| |
| Subj (Wall time) | 5.96 (3.05e-01) | 8.05 (1.73e-01) | 7.89 (1.33e-01) |
| SST2 (Wall time) | 5.24 (1.63e-01) | 7.61 (1.69e-01) | 7.51 (2.01e-01) |
| Rotten tomatoes (Wall time) | 5.25 (2.93e-01) | 9.18 (1.78e-01) | 9.11 (1.57e-01) |
| AG News (Wall time) | 4.30 (8.07e-02) | 12.8 (2.00e-01) | 11.2 (1.73e-01) |
Thank you again for providing a constructive review and offering insightful suggestions. Please let us know if there are any remaining questions. We are glad to offer further clarifications.
---
Rebuttal Comment 1.1:
Title: Reply to Authors' Rebuttal
Comment: Thanks for the authors' detailed responses to my questions. The additional experiments and discussions are helpful and provide more evidence supporting its usability. Hence, I would like to raise my score.
---
Reply to Comment 1.1.1:
Title: Thank you for acknowledging our response and raising the score
Comment: Thank you very much for acknowledging that the additional experiments in our response are detailed and helpful and for raising the score. We will incorporate the experimental results and our clarifications in our revision. | Summary: The paper proposes a novel attribution method for demonstrations in ICL. The proposed method takes a perspective that the transformers learn in context by formulating an internal optimizer. The influence function is approximated as an internal kernelized ridge regression, where the representations are taken from the intermediate layers of the transformers and white-box LLMs. The paper also demonstrate that the attribution obtained from the white-box LLM exhibits transferable characteristics to black-box models. The paper also showcases a few promising applications of DETAIL.
Strengths: - By and large, the paper is written well. I especially appreciated the discussion on the relations to reinforcement learning, and how the potential functions differ from Lyapunov functions used in more classical settings.
- The idea of using the perspective that transformers formulate an internal optimizer for attribution is novel and useful.
- The section on accelerating the computation with self-influence is also interesting, and highlights the authors' attention to computational issues.
- The application of DETAIL may open new research directions in LLMs for better instruction-tuning algorithms.
- The mathematical derivations appear to be correct.
Weaknesses: - It would have been interesting to see how well this method would identify adversarial attacks for LM.
- Some additional commentary on how DETAIL can help with LLM training would have been very helpful.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Could you comment on whether DETAIL can be used to discover adversarial attacks of LLMs?
- Could you comment on how DETAIL can help with LLM training especially for instruction tuning?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, the author addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to the review of reviewer ixwr
Dear Reviewer ixwr,
Thank you for reviewing our paper and highlighting the novelty of our application of 'influence' for ICL which has the potential to inspire better instruction-tuning algorithms.
We would like to address your concerns in the following.
### 1. Identifying adversarial attacks for LM
We thank you for suggesting this exciting potential use case. In this work, we have demonstrated how DETAIL can identify corrupted demonstrations, which is one form of adversarial attack where an attack is performed by perturbing the ICL demonstration label (Table 1 and Table 3). We believe it is an interesting area to research how DETAIL can be applied to other types of attacks. For example, DETAIL might be used to defend against attacks that attempt to steal private ICL demonstrations by filtering out or rephrasing queries that cause excessively high DETAIL scores.
### 2. Additional comment on how DETAIL can help with LLM training
We thank you for the question. While we primarily focus on in-context learning settings in this work, the DETAIL score can also potentially be used for selecting demonstrations for instruction fine-tuning. One way to perform the selection is to compute the DETAIL score of each ICL demonstration on a validation set and then use the demonstrations with the highest DETAIL scores for fine-tuning.
We will incorporate the mentioned points in the future work section in our work.
Once again, we would like to thank you for the insightful suggestions. We hope our response has been helpful in addressing your questions. | Rebuttal 1:
Rebuttal: We thank all the reviewers for taking the time to provide constructive reviews and insightful suggestions. We have addressed the concerns in the respective rebuttal sections. As a general response, we would like to highlight that the main contribution of this work is to provide an interpretable attribution for ICL demonstrations on transformers that is also computationally efficient. Our experiments demonstrate DETAIL's effectiveness in attribution as well as in applications such as curation and re-ordering.
We have attached a PDF below including additional experiments on the re-ordering task. For additional experiments that consider BERT-related methods, please refer to our response to reviewer EuFx.
We are more than happy to clarify any additional questions regarding our rebuttal during the author-reviewer discussion period.
Pdf: /pdf/acd391165c6235dddff3d79c65bcb361456eac48.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
BECAUSE: Bilinear Causal Representation for Generalizable Offline Model-based Reinforcement Learning | Accept (poster) | Summary: Distribution shifts in the sampling of MBRL will lead to the objective mismatch between model and policy learning.
Therefore, this paper aims to reduce the influence of the distribution shift for MBRL.
It divides the confounders into $\mu_{\pi}$ and $\mu_{c}$ and then tries to capture causal representations to reduce the influence of the distribution shift, thus mitigating the objective mismatch problem.
Strengths: 1. This paper is well-written and easy to follow.
2. In RL, this paper tries to address confounders, which are a key problem in causal inference.
3. There are some new theoretical results in this paper.
4. Experiment results illustrate the effectiveness of the proposed method.
Weaknesses: 1. The conditions in Bilinear MDPs are difficult to achieve in real-world environments.
2. Some real-world examples can be given for confounders $\mu_{\pi}$ and $\mu_{c}$.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. What is the world model in this paper? Does this paper use a world model to train the policy?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to reviewer 8enR for their valuable feedback. The reviewer acknowledges the clarity of our problem formulation and presentation, as well as the theoretical and empirical contribution of this work. We will answer the main questions below:
**Q1. (Generality of Bilinear MDP Formulation): The conditions in Bilinear MDPs are difficult to achieve in real-world environments. Some real-world examples can be given for confounders $u_\pi$ and $u_c$.**
We thank the reviewer for raising these important questions about the generality of the bilinear MDP formulation. We discuss two categories of confounders in our experiments, which simulate real-world problems like indoor navigation, robotic manipulation, and self-driving cars. One real-world example of the two categories of confounders $\mu_\pi$ and $\mu_c$ arises in autonomous driving environments. We have the states ['time-to-collision', 'daytime'] and actions ['throttle', 'steering']. Time-to-collision is the minimum of the relative distance to the closest vehicle divided by the velocity. The offline dataset may have two sources of confounders:
- **Confounders from suboptimal behavior policy:** $u_\pi$ is the confounder introduced by suboptimal behavior policies. For example, an aggressive driver always overtakes the vehicles in front. This sub-optimal behavior policy $\pi_\beta(a | s)$ may apply a large throttle even when the 'time-to-collision' is extremely small. Such behavior policies introduce confounders that cause spurious correlations between states ('time-to-collision') and actions ('throttle'). This spurious correlation may cause collisions and low reward if we directly learn from this dataset and deploy the policy online. In our experiments, we denote the different behavior policies as 'random', 'medium', and 'expert' policies, leading to different confounded levels of $u_\pi$.
- **Confounders from environment dynamics**: $u_c$ is the confounder that exists in the transition dynamics $T(s'|s, a)$, and it causes spurious correlations between different state and action components in the transition dynamics. In our real-world example, traffic is more crowded during peak hours and less crowded during the rest of the day. Due to the confounder between 'crowdedness' and 'daytime', the model may fail under crowded scenarios outside peak hours, because it will falsely predict a higher 'time-to-collision' outside peak hours even if the traffic is dense. This is because the model is misled by the spurious correlation between 'time-to-collision' in $s'$ and 'daytime' in $s$. In our experiments, we also have different confounders in different tasks across all 3 environments. We have discussed the environments in Appendix C.5.
We visualize the above confounded case in the MDP as **Figure (a) and (b) in the rebuttal supplementary**.
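To make the confounding concrete, the following toy simulation (our own construction with made-up numbers, not the paper's simulator) shows how 'daytime' can confound traffic density and the next 'time-to-collision', so that a naive model conditioning on daytime learns a spurious correlation that vanishes once the true cause is controlled for:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical generative process: peak hours cause dense traffic, and
# dense traffic causes a low time-to-collision (ttc). 'daytime' itself
# has no direct causal effect on the next ttc.
peak = rng.integers(0, 2, n)               # 1 = peak hour
dense = rng.binomial(1, 0.2 + 0.6 * peak)  # traffic density, confounded by daytime
ttc_next = 5.0 - 3.0 * dense + rng.normal(0.0, 0.3, n)

# A naive model conditioning on 'daytime' sees a large spurious gap:
spurious_gap = ttc_next[peak == 0].mean() - ttc_next[peak == 1].mean()

# Conditioning on the true cause ('dense') makes daytime uninformative:
in_dense = dense == 1
causal_gap = (ttc_next[in_dense & (peak == 0)].mean()
              - ttc_next[in_dense & (peak == 1)].mean())

print(round(spurious_gap, 2))  # large: off-peak looks "safer" to the naive model
print(round(causal_gap, 2))    # near 0 once density is accounted for
```

The naive gap reflects only the confounding path through 'daytime'; blocking that path removes the correlation, which is exactly the failure mode described above for a model that has not deconfounded $u_c$.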
**Q2. (Clarification of World Model): What is the world model in this paper? Does this paper use a world model to train the policy?**
We thank the reviewer for requesting clarification about the world model. In our setting, the world model is composed of a dynamics model $T(s' | s, a)$ and a reward model $r(s, a)$. More concretely, during the planning phase, we have the following pessimistic value iteration:
$$
\overline{Q}(s, a) = r(s, a) - E_\theta(s, a) + \sum_{s'\in \mathcal{S}} \widehat{T}(s'|s, a) \widehat{V}(s').
$$
Therefore, optimization of our pessimistic policy will be dependent on the learned dynamics model $\widehat{T}(s' | s, a)$ and reward function $r(s, a)$.
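As a minimal illustration of this backup, here is a toy tabular sketch (our own made-up numbers; we add a discount factor for convergence, which the displayed equation omits):

```python
import numpy as np

def pessimistic_value_iteration(T_hat, r, E, gamma=0.99, iters=1000):
    """Iterate Q(s,a) = r(s,a) - E(s,a) + gamma * sum_s' T_hat(s'|s,a) V(s')."""
    V = np.zeros(r.shape[0])
    Q = np.zeros_like(r)
    for _ in range(iters):
        Q = r - E + gamma * T_hat @ V  # T_hat: (S, A, S'), V: (S',) -> (S, A)
        V = Q.max(axis=1)
    return Q, V

# Toy 2-state, 2-action MDP; both actions share dynamics within a state,
# so the uncertainty penalty E alone decides the preferred action.
T_hat = np.array([[[0.9, 0.1], [0.9, 0.1]],
                  [[0.2, 0.8], [0.2, 0.8]]])
r = np.array([[1.0, 1.5], [0.0, 0.5]])
E = np.array([[0.0, 1.0], [0.0, 0.0]])  # action 1 in state 0 is uncertain

Q_pess, _ = pessimistic_value_iteration(T_hat, r, E)
Q_greedy, _ = pessimistic_value_iteration(T_hat, r, np.zeros_like(E))
print(Q_pess[0].argmax(), Q_greedy[0].argmax())  # -> 0 1 (penalty flips the action)
```

With the penalty the pessimistic policy avoids the uncertain action in state 0; removing it flips the preference back to the higher-reward action.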
---
Rebuttal Comment 1.1:
Comment: World models were first defined in reference [1].
Many well-known reinforcement learning algorithms, such as Dreamer [2], are based on the world model.
Redefining world models is not appropriate, so I changed my score to 3.
[1] David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
[2] Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. In International Conference on Learning Representations, 2019.
---
Rebuttal 2:
Comment: Dear Reviewer 8enR,
While the reviewer discussions have not officially started, I would like to discuss whether the definition of 'world model' presented here is really inappropriate. In my opinion, the world model could be defined as the dynamics model and the reward model in the MDP, similar to what the authors have mentioned.
I'm not saying that every aspect of this paper is flawless, but I believe rejecting it solely based on this definition might be a bit unfair. I would appreciate your thoughts on this and welcome any corrections if I am wrong.
Reviewer XKke
---
Rebuttal Comment 2.1:
Title: Dear Reviewer XKke
Comment: Dear Reviewer XKke
I think publishing such a definition would mislead many readers, so this is a serious mistake.
Reviewer 8enR
---
Rebuttal 3:
Comment: The world model consists of all the planner's knowledge of the past, present, and future, expressed in a temporal logic [6].
To optimize POMDP problems, we should consider not only the present state but also historical information to predict future states or rewards; therefore, a world model can be used to optimize POMDP tasks.
However, because of the Markov property, we only need to consider the present state in MDPs, which are the key problem setting in this paper. Can you find any literature in which a world model is used for MDPs and does not consider historical information?
[6] Allen, James F., and Johannes A. Koomen. "Planning using a temporal world model." Proceedings of the Eighth international joint conference on Artificial intelligence-Volume 2. 1983.
---
Rebuttal Comment 3.1:
Comment: We thank reviewer 8enR for engaging in the follow-up discussion. There are a few recent works that use 'world model' in the MDP formulation, including but not limited to [1, 2, 3].
Denoised MDPs [1] formulates its problem with MDPs instead of POMDPs and discusses how the MDP formulation is generic enough for its world model learning problem:
> For generality, we consider tasks in the form of Markov Decision Processes (MDPs), described in the usual manner: $\mathcal{M}\triangleq (\mathcal{S}, \mathcal{A}, R, P, p_{s_0})$ (Puterman, 1994), where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $R$: $\mathcal{S}\to \Delta([0, r_{max}])$ defines the reward random variable $R(s')$ received for arriving at state $s'\in \mathcal{S}$, $P: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is the transition dynamics, and $p_{s_0} \in \Delta(\mathcal{S})$ defines the distribution of initial state. We use $\Delta(A)$ to denote the set of all distributions over $A$. $P$ and $R$ define the most important components of a MDP: the transition dynamics $P[s'| s, a]$ and the reward function $P[r | s']$. Usually, the objective is to find a policy $\pi: S\to \Delta(A)$ acting based on current state, that maximizes the expected cumulative (discounted) reward.
>
> Indeed, MDPs provide a general formulation that encompasses many tasks. In fact, the entire real world may be viewed as an MDP with a rich state/observation space S that contains all possible information/signal. For an artificial agent to successfully perform real-world tasks, it must be able to process observations that are incredibly rich and high-dimensional, such as visual or audio signals.
As we mentioned in our response, our description of the world model follows [1], which is formulated in the MDP setting. We also want to emphasize that **our key contribution is the bilinear causal representation learning**. If reviewer 8enR still feels that using 'world model' may affect their judgment of our contribution, we are happy to add more remarks in our appendix or **change 'world model' to another term that reviewer 8enR finds more appropriate**. We believe this change will not affect the core contribution of this paper. Once again, we hope the reviewer's concern can be resolved if they clarify their requests during the discussion phase.
> [1] Wang, Tongzhou, et al. "Denoised mdps: Learning world models better than the world itself." arXiv preprint arXiv:2206.15477 (2022).
>
> [2] Zhu, Zheng-Mao, et al. "Offline reinforcement learning with causal structured world models." arXiv preprint arXiv:2206.01474 (2022).
>
> [3] Liu, Jingbin, Xinyang Gu, and Shuai Liu. "Reinforcement learning with world model." arXiv preprint arXiv:1908.11494 (2019).
---
Rebuttal 4:
Comment: **I did not mean that the current state encoded all historical information.**
In MDPs, historical information cannot change the probability distribution of $s_{t+1}$; however, with the help of context from historical information, we may construct a new causal function for MDPs.
I would like to keep my score.
---
Rebuttal Comment 4.1:
Comment: Thanks for the extensive discussion on the term of "World Model". I think it's useful to clarify the definition of the models learned in this paper to avoid potential ambiguity with the term in the literature.
Reviewer 8enR, do you have additional major concerns that led to your current rating apart from the use of "world model"? | Summary: The paper introduces a method to improve model-based offline reinforcement learning by addressing the objective mismatch problem using causal discovery methods. This problem stems from the fact that in this setup the learning algorithm aims to solve the following two problems at once: i) accurate learning of the transition dynamics, ii) identifying the optimal policy on the MDP characterized by the learned transition dynamics. However in practice, an improved solution to one of these two problems does not translate to a better solution to the other one, while in theory it should. The paper's central hypothesis is that this disconnect is caused by the spurious correlations learned by the model which can be blocked by a causality based formulation.
Strengths: * The proposed methodology is very sensible and interesting. Formulating a latent bilinear MDP to enable differentiable causal discovery approaches is a brilliant idea.
* The paper studies the theoretical properties of the presented approach. Although the used theoretical tools are not novel per se, they serve for their purpose excellently. The conceptual connection between causal representation learning and bilinear MDPs is indeed novel and worthwhile acknowledging.
* Despite the denseness of the technical material, the paper is very easy to read and understand.
* The presented empirical results are satisfactory.
Weaknesses: * The compared baselines do make sense. Some of them such as ICIL and CCIL are causal RL approaches. However, the experiments part could be even stronger if the chosen model-based ORL approaches represented the state of the art a little better. TD3+BC and MOPO are outdated. One could for instance consider to compare against a stronger baseline within the same family, such as:
Sun et al., Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning, ICML, 2023.
* It would be interesting to see a demonstration of how well BECAUSE can recover the underlying confounders of a simple MDP. This could be an interesting research question to be investigated in the experiments section. This would enhance the value of the proposed approach from an instrumental to an explanatory one.
Technical Quality: 4
Clarity: 4
Questions for Authors: Why was it not possible and/or sensible to study the problem in the most well-known benchmark of offline RL research, the MuJoco environments of D4RL?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The paper discusses the limitations of the proposed approach in the final few sentences of the conclusion section. However, it does not discuss the potential negative societal impact of the presented work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their insightful and inspiring feedback. We are glad to know that the reviewer recognizes the novelty of our contributions, the clarity of our problem formulation, and the empirical results. We answer reviewers' questions about the causal learning process through the bilinear MDP optimization below.
**Q1. (Advanced Model-based Offline RL baselines): The compared baselines do make sense. Some of them such as ICIL and CCIL are causal RL approaches.... consider to compare against a stronger baseline within the same family, such as: (Sun et al., ICML, 2023.).**
We thank the reviewer for proposing this important baseline. We add additional experiments on the *Unlock* environment and report the results in the first table in the general response. While MOBILE indeed demonstrates promising performance in the OOD generalization setting compared to the MOPO baselines, we still observe a significant advantage for our method, which learns a bilinear causal representation in the world model and uses it for planning.
**Q2. (Discussing recovery and explainability of confounders): It would be interesting to see a demonstration of how well BECAUSE can recover the underlying confounders... enhance the value of the proposed approach from an instrumental to an explanatory one.**
As shown in **Figure (a) and (b) in the rebuttal supplementary**, we give a simple MDP example from autonomous driving. We have the states ['time-to-collision', 'daytime'] and actions ['throttle', 'steering']. (Time-to-collision (TTC) is the relative distance to the closest vehicle divided by the velocity, so the TTC approaches 0 as a collision becomes imminent.)
In general, the offline dataset encounters two sources of confounders:
- **Confounders from suboptimal behavior policy:** As shown in **Figure (a)**, $u_\pi$ is the confounder introduced by suboptimal behavior policies. For example, an aggressive driver always overtakes the vehicles in front. Such a suboptimal behavior policy $\pi_\beta(a | s)$ may apply a large throttle even when the 'time-to-collision' is extremely small. These behavior policies introduce confounders that cause spurious correlations between states ('ttc') and actions ('throttle'). This spurious correlation may cause collisions if we directly learn from this dataset and deploy the policy online. In our experiment, we denote different behavior policies as 'random', 'medium', and 'expert' policies, leading to different confounding levels of $u_\pi$.
- **Confounders from environment dynamics**: $u_c$ is the confounder that exists in the transition dynamics $T(s'|s, a)$, and it causes spurious correlations between different state and action components in the transition dynamics. In the driving example, as shown in **Figure (b)**, traffic is more crowded (lower ttc) during peak hours and less crowded (higher ttc) otherwise. With a confounder between the next 'ttc' and the current 'daytime', the raw model fails to predict well in crowded scenarios outside peak hours, because it falsely predicts a higher 'ttc' outside peak hours even if the actual traffic is dense. The model is misled by the spurious correlation between 'time-to-collision' in $s'$ and 'daytime' in $s$. In our experiment, we also have different confounders for different tasks in all 3 environments (see Appendix C.5).
Additionally, we are running numerical simulations on this simple example, in which BECAUSE can indeed help identify the transition dynamics. We plan to add these results to our final experiments.
**Q3. (Discussing selected benchmark): Why was it not possible and/or sensible to study the problem in the most well-known benchmark of offline RL research, the MuJoco environments of D4RL?**
Our experiment simulator is adopted by several published works [1, 2, 3] in the causal RL domain. The selected environments have strong complexity in their causal structure, which leads to distribution mismatch between different training environments. Moreover, they relate to real-world problems such as indoor navigation, manipulation, and autonomous vehicles. These tasks all require strong reasoning capability to understand the interaction between the decision-making agent and other static or dynamic environment entities.
On the other hand, in the MuJoCo-based D4RL environments, such cause-and-effect structures mainly concern the physical transformations of locomotion tasks, which have lower complexity since they do not model the interaction between different entities through a causal structure.
>[1] Wang, Zizhao, et al. "Causal Dynamics Learning for Task-Independent State Abstraction." International Conference on Machine Learning. PMLR, 2022.
>
>[2] Ding, Wenhao, et al. "Generalizing goal-conditioned reinforcement learning with variational causal reasoning." Advances in Neural Information Processing Systems 35 (2022): 26532-26548.
>
>[3] Ding, Wenhao, et al. "Seeing is not believing: Robust reinforcement learning against spurious correlation." Advances in Neural Information Processing Systems 36 (2023).
**Q4. (Adding societal impact): The paper discusses the limitations of the proposed approach in the final few sentences of the conclusion section. However, it does not discuss the potential negative societal impact of the presented work.**
We thank the reviewer for raising the concern about potential negative societal impact. We added the following discussion of societal impacts to our appendix:
*Incorporating causality into a model-based reinforcement learning framework provides explainability for decision making, which helps humans understand the underlying mechanism of the algorithm and trace the source of failures. However, the learned causal graph may contain human-readable private information about the offline dataset, which could raise privacy issues. To mitigate this potential negative societal impact, the discovered causal graphs should only be accessible to trustworthy users.*
---
Rebuttal Comment 1.1:
Title: Keep score
Comment: Thanks for your satisfactory answer. I keep my score.
---
Rebuttal 2:
Comment: We appreciate the dedicated efforts behind all the insightful reviews and the acknowledgment from reviewer ne9G. These constructive reviews helped improve our paper's quality during the revision and rebuttal phase.
In general, I hold a borderline but positive rating in this initial review. I will consider increasing my rating if the authors could give clarifications on my questions in the following sections (especially questions 1, 2, 3).
Strengths: - [*Motivation and general framework*]: The idea of using causality to model the data generation process in RL (or in general, dynamic systems) gives a principle way to model RL environments. By considering and modeling potential confounders, which often lead to distribution shifts or mismatches, the paper addresses a critical challenge in RL. The decomposition of confounders into dynamics and policy-related aspects is reasonable and well-motivated in this work.
- [*Technical soundness*]: The model and optimization processes are technically robust. The proofs and analyses provided in the appendix have been checked and are generally correct, offering complementary insights to the main part of the paper.
- The writing is clear and easy to follow.
Weaknesses: Most of the weaknesses and questions here are intertwined, so I put both in this section. One of the major weaknesses, which is also a primary question, is the inherent difficulty of fully learning the causal process through the bilinear MDP optimization (details in questions 1-3).
**Question 1**: [Learning the causal graph by learning $M$] While learning M can partially capture the G graph, two main concerns: (1). Feature space entanglement: In the feature space $\phi$ and $\mu$, the causal variables might be entangled, potentially making the $G$ graph inaccurate (hard to identify from mixed sources). Please clarify if I have any misunderstanding on this; (2) optimization on the search space: Does the optimization search space of $M$ empirically impact the ability to learn the true $G$ graph?
**Question 2**: [About assumption 1] The invariant causal graph assumption simplifies the problem but might be problematic. Specifically: (1). Mapping invariance: $M(u)$ may not be invariant since the scale of confounder effects matters, especially when some latent confounder effects are too small to model or identify in some domains (while in other domains they are easy to identify); (2). Details on equivalence with assumptions in previous works: Further explanation, preferably with formulas, is needed to clarify the equivalence of this assumption with those in other works (e.g., invariant state representation, action effect) as mentioned in Remark 1.
**Question 3**: [Multiple sources of confounders and identifiability of the causal graph]: How well can M capture different sources of confounders, especially state dynamics confounders $u_c$? How can we ensure the method separates these confounder sources, which is crucial in many RL environments with multiple unobservable latents?
**Question 4**: [Further structure decomposition in world model]: Can the method incorporate algorithms like denoised MDPs to handle cases where state dynamics confounders affect only a subset of states? This could potentially make M more compact.
**Question 5**: [Experimentation with raw pixel inputs]: Can the method work with raw pixel inputs, similar to experiments in works like IFactor cited in the paper? Will it still be effective when learning a visual encoder to extract states?
Technical Quality: 2
Clarity: 3
Questions for Authors: I listed all the questions (mixed with weaknesses) in the above section.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations have been discussed in the conclusion section. Though there is no in-depth discussion on societal impacts, I don't find any particular societal impacts in this paper compared with other RL works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to reviewer XKke for their comprehensive and insightful feedback. We are glad to know that the reviewer recognizes the clarity of our problem formulation, the novelty and technical soundness of the proposed method, as well as the theoretical and empirical results. We answer key clarification questions about the causal learning process through the bilinear MDP optimization below.
**Q1. (Learning the causal graph by learning $M$)**:
For the first concern on feature space entanglement: our learning process alternates between feature learning and causal mask learning, so the causal variables $\phi, \mu$ will best align with the causal relationships. In harder cases with poor offline data quality (such as the *random* behavior policy), we do observe a performance drop and significant objective mismatch, as shown in Figure 6. However, our method still outperforms baselines in these cases with the help of the bilinear causal representation. Besides, our method is also robust under limited sample sizes and various spurious levels of confounders, as demonstrated in Figures 4b and 4c.
For the second concern on the optimization on the search space, we'd like to first clarify the relationship between M and G under the formulation of bilinear MDP:
$$
G=\left[\begin{matrix} 0^{d\times d} & M \\ 0^{d'\times d} & 0^{d'\times d'} \end{matrix}\right]\in \mathbb{R}^{(d+d')\times (d+d')}.
$$
The optimization of $M$ is conditional independence testing (CIT) on $d\times d'$ pairs of features: empirically, our causal discovery process performs CIT on every element of the $d\times d'$ matrix. The optimization search space of $M$ is constrained by the size of $M\in \mathbb{R}^{d\times d'}$. Since the causal discovery is based on hypothesis-testing statistics such as the $\chi^2$ test, the ability to identify the true causal graph $G$ is not affected by the size of $M$.
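As a self-contained illustration of this entry-wise testing (our own toy version with binary features and a hand-rolled Pearson $\chi^2$ statistic; the exact test used in the paper may differ), each candidate edge of the mask is kept only if the statistic exceeds the 1-degree-of-freedom critical value:

```python
import numpy as np

def chi2_stat(x, y):
    """Pearson chi-squared statistic for two binary variables."""
    n = len(x)
    table = np.array([[np.sum((x == a) & (y == b)) for b in (0, 1)]
                      for a in (0, 1)], dtype=float)
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / n
    return np.sum((table - expected) ** 2 / expected)

def estimate_mask(feats, next_feats, crit=3.84):  # chi2(1 dof, alpha = 0.05)
    d, dp = feats.shape[1], next_feats.shape[1]
    M = np.zeros((d, dp), dtype=int)
    for i in range(d):
        for j in range(dp):
            M[i, j] = chi2_stat(feats[:, i], next_feats[:, j]) > crit
    return M

rng = np.random.default_rng(0)
f = rng.integers(0, 2, size=(5000, 2))
# Next-feature 0 depends (noisily) only on current feature 0; the second
# next-feature is pure noise, so only one true edge exists.
nf = np.stack([(f[:, 0] + (rng.random(5000) < 0.1)) % 2,
               rng.integers(0, 2, 5000)], axis=1)
M_hat = estimate_mask(f, nf)  # M_hat[0, 0] should be 1 (the true edge)
```

Because the decision per entry is a fixed-level hypothesis test, the cost grows with the number of entries $d\times d'$, but the ability to detect each true edge does not degrade with the size of the matrix.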
**Q2. (About assumption 1)**:
We thank the reviewer for asking insightful questions on our core assumption.
Regarding the first concern on mapping invariance: if the confounder effects are small, as the reviewer mentioned, it means either (a) the confounder is not important to the success of our task in either domain, or (b) the dataset we have at hand is too biased to fully learn the true causality.
- For case (a), we empirically verify the invariance of the discovered causal graph with an experiment on robustness to the spurious level of confounders, as mentioned in RQ2 of Section 4.2. In this experiment, we vary the spurious levels (i.e., the scale of confounding effects) by changing the number of confounders in the environments. We show consistent performance with less degradation compared to baselines without the bilinear causal representation, as shown in Figure 4c. As long as these latent confounders have enough statistical evidence to be identified by causal discovery, we can learn a good causal representation.
- For case (b), which mostly corresponds to the 'random' policy setting where the behavior policy is heavily confounded by the policy confounder $u_\pi$, BECAUSE empirically performs better than other causal RL or MBRL methods.
In conclusion, in either case (a) or (b) raised by the reviewer, BECAUSE can empirically outperform other MBRL baselines, which justifies our assumption of invariant causal mask.
Regarding the second concern on the equivalence with assumptions in previous works, we add some additional discussion in **Table I of our 1-page rebuttal supplementary** with a mathematical description of the specific meaning of the invariant causal graph and how it relates to the assumptions cited in our manuscripts.
**Q3: (Multiple sources of confounders and identifiability of the causal graph)**:
The learning of $M$ is conducted in two stages: regularized world model learning to deconfound $u_c$ in Equations (5) and (6), and policy-conditioned reweighting to deconfound $u_\pi$ in Equation (7).
- The first stage uses the $\ell_0$ objective to regularize the world model loss while maintaining a sparse $M$.
- In the second stage, we average $M$ by reweighting its expectation across different trajectory batches.
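A minimal numerical sketch of the first stage, substituting an $\ell_1$ relaxation with ISTA-style soft thresholding for the $\ell_0$ objective (an illustrative simplification of ours, not the paper's optimizer; all names and constants are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 2
S = rng.normal(size=(n, d))
# Ground truth: the next feature depends only on current feature 0.
Sp = S[:, :1] + 0.05 * rng.normal(size=(n, 1))

# ISTA: a gradient step on the squared dynamics loss, then soft
# thresholding, which zeroes spurious edges and leaves a sparse mask.
W = np.zeros((1, d))
lam, lr = 0.1, 0.2
for _ in range(500):
    grad = (S @ W.T - Sp).T @ S / n
    W = W - lr * grad
    W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)

M_hat = (np.abs(W) > 1e-6).astype(int)  # support of W, i.e. the learned mask
print(M_hat)  # the edge survives only for the true parent feature
```

The surviving support of $W$ plays the role of the sparse mask; the second-stage reweighting across trajectory batches is omitted here for brevity.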
We demonstrate the effect of the above deconfounding process both empirically and theoretically:
- **Experiment** results show that our method outperforms the existing causal RL and model-based RL baselines in both in-distribution and OOD settings in three environments.
- **Theoretically**, we provide additional analysis of how well this deconfounding process can identify the causal matrix $M$. Under reasonable assumptions about the offline dataset and the underlying dynamics model, we can bound the estimation error of $M$ and obtain performance guarantees for the learned policy in Theorem 1.
**Q4: (Further structure decomposition in world model)**:
We thank the reviewer for suggesting this inspiring extension of our work. As we discuss in the related work, Denoised MDPs proposes a structural decomposition according to the controllability and reward relevance of the state. To ground our bilinear representation in this context, one interesting direction is to decompose $\phi, \mu$ into controllable/non-controllable and reward-relevant/irrelevant parts. This will be an interesting future direction for making $M$ more compact.
**Q5: (Experimentation with raw pixel inputs)**:
To further demonstrate the scalability of our method, we conduct additional experiments with raw-pixel inputs in the *Unlock* environments. The RGB input has a shape of (192, 192, 3); the detailed results can be found in the general response.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and clarification. Most of my concerns have been well addressed, especially with the newly added Table 1 and the additional results on pixel experiments. I just have one minor follow-up question: for Q3, my initial question was actually about whether maintaining a sparse $M$ here could be the necessary and sufficient condition for estimating the multiple confounders (e.g., different confounders affect different components in the state dynamics). Could the authors provide more insights on this point?
For now, given the current quality and the rebuttal, I would increase my rating to 6.
---
Rebuttal 2:
Comment: We thank reviewer XKke for acknowledging our response with additional theoretical and empirical results. We will answer your last clarification question below.
**I just have one minor follow-up question: for Q3, my initial question was actually about whether maintaining a sparse $M$ here could be the necessary and sufficient condition for estimating the multiple confounders (e.g., different confounders affect different components in the state dynamics). Could the authors provide more insights on this point?**
Given our current assumptions about the Action-State Confounded MDP (ASC-MDP) and the Structural Causal Model (SCM), the $\ell_0$ sparsity regularization we apply in the optimization of $M$ is a **necessary but not sufficient** condition for estimating all the latent confounders behind the causal graph.
- The **necessity** of sparsity regularization in $M$ is clear: we need to avoid spurious correlations by trimming as many unnecessary edges in the transition dynamics as possible while maintaining good prediction accuracy.
- On the **sufficiency** part, although we cannot guarantee a complete recovery of the true causal matrix $M$, we can bound the estimation error of $M$ by a polynomial of the following terms:
  - the SCM's noise level $\sigma$ (how strong the confounding effect can be),
  - the sample size $n$,
  - the matrix dimension $d\times d'$,
  - the structural complexity of the ground-truth mask $\|M\|_0$.

We have $\text{error}\leq \mathrm{poly}(dd', \frac{1}{n}, \sigma, \|M\|_0)$. **Our Appendix B gives a more formal proof.** As a result, we have high-probability guarantees: if we want a near-optimal ($\epsilon$-error) estimate of the causal matrix $M$, i.e. $\|\widehat{M}-M\|\leq \epsilon$ with probability at least $1-\delta$, we need $n(\epsilon, \delta)$ samples.
---
Rebuttal 3:
Comment: Thank you for your further response. My concerns have been well addressed and I would like to keep my positive rating. | Summary: This paper studies challenges in Model Based rl, namely:
1. Objective mismatch between estimating the model and policy learning for value optimization
2. Confoundedness in offline RL causing this mismatch
The paper proposes a theoretically sound algorithm and experiments extensively to support their claims.
Strengths: 1. Well written paper - Problem presentation is very clear
2. Experiments cover many scenarios, and ablation studies have been provided. I'm curious why multi-task RL algorithms weren't used as baselines given the setup of the experiments.
Weaknesses: 1. Has missed some relevant related work particularly in theoretical offline RL which use pessimism based on uncertainty. Please cite : https://arxiv.org/pdf/2402.12570, https://arxiv.org/pdf/2403.11574
2. The result in Theorem 1 is applicable for a Tabular MDP, but this hasn't been discussed in the paper anywhere. The result becomes void in the case when some (s_h, a_h) along the optimal policy has not been seen in the offline dataset.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the Theorem 1 fare to existing works? A detailed comparison on the improvement factors in the sample complexity would be useful.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There are no significant limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to reviewer aSxm for their insightful feedback and acknowledging the novelty of our contributions in practice and theory. We provide our response to the questions below.
**Q1. (Additional Related works): Related works about theoretical offline RL using pessimism... multi-task RL algorithms weren't used as baselines given the setup of the experiments.**
As the reviewer suggested, we added related references to offline RL theory in the related work section.
- **Comparisons with the two related works**: The reviewer raised two relevant works about multi-task RL in offline settings with linear function approximation (linear MDPs). They consider the same sampling mechanism -- the offline setting -- but a different model assumption (linear MDPs) and problem setting (multi-task) from our bilinear MDPs and action-state confounded MDPs.
- **Differences from multi-task offline RL**: Our work does not primarily target multi-task learning, which aims to solve multiple tasks jointly in an efficient way [2] by leveraging their shared information. In our case, we focus on problems with unobserved causal effects -- the training data has a distribution mismatch with the testing environments due to unobserved confounders. We did not include multi-task RL baselines since they do not target our challenge of causal effects, which would make the comparison somewhat unfair. Instead, we focus on more related baselines -- 10 causal RL and model-based RL baselines in Section 4.1 and Appendix C.6.
>[1] Jin, Ying, Zhuoran Yang, and Zhaoran Wang. "Is pessimism provably efficient for offline rl?." International Conference on Machine Learning. PMLR, 2021.
>
>[2] Yu, Tianhe, et al. "Conservative data sharing for multi-task offline reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 11501-11516.
**Q2. (Applicability of theoretical results): The result in Theorem 1 is applicable for a Tabular MDP, but this hasn't been discussed in the paper anywhere.**
The reviewer is correct that Theorem 1 is primarily applicable to the tabular case, where the state and action spaces are finite. We added a remark under the theorem as the reviewer suggested:
- **Our theorem can be extended to continuous state spaces.** Theorem 1 provides the performance guarantees of our proposed method in the tabular case with finite state and action spaces. It lays a solid foundation for more general cases with continuous state and action spaces. The current results can be extended to continuous state spaces by using a covering-number argument in our key lemmas, while handling continuous action spaces needs more adaptation; we leave this as interesting future work.
- **Continuous spaces can potentially be reduced to the tabular case in practice.** Our proposed algorithm can be extended to continuous state or action spaces via additional derivation on feature extraction. Prior works such as VQ-VAE [1] provide powerful encoders that can tokenize a continuous visual input into a discrete latent space with finitely many elements. We will add more discussion on this point in the revised manuscript.
**Q3. (Theoretical results justification): The result becomes void in the case when some (s_h, a_h) along the optimal policy has not been seen in the offline dataset.**
The reviewer is insightful that the offline dataset **does** need to satisfy some properties to make offline learning possible and well-posed. To the best of our knowledge, the current weakest assumption on the offline dataset that ensures provably learning an optimal policy is the (clipped) single-policy concentrability assumption [2, 3] (e.g., Definition 1 in [3]) -- the offline dataset needs to include sufficient data over the region (state-action tuples $(s_h, a_h)$) that the optimal policy can visit. Under this weakest assumption, our Theorem 1 ensures the performance of the proposed algorithm.
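For concreteness, the single-policy concentrability condition can be stated schematically as follows (our paraphrase following [2, 3], not a direct quote; $d_h^{\pi}$ denotes the state-action occupancy measure of policy $\pi$ at step $h$, and $\mu$ denotes the behavior policy generating the offline data):

```latex
% Schematic single-policy concentrability (paraphrase of [2, 3]):
% the optimal policy's occupancy must be dominated by the data distribution.
C^{\star} \;=\; \max_{h,\, s_h,\, a_h} \frac{d_h^{\pi^{\star}}(s_h, a_h)}{d_h^{\mu}(s_h, a_h)} \;<\; \infty
```

Intuitively, a finite $C^{\star}$ means the dataset covers every state-action pair the optimal policy visits, up to a bounded density ratio.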
>[1] Van Den Oord, Aaron, and Oriol Vinyals. "Neural discrete representation learning." Advances in neural information processing systems 30 (2017).
>
>[2] Li, Gen, et al. "Settling the sample complexity of model-based offline reinforcement learning." The Annals of Statistics 52.1 (2024): 233-260.
>
>[3] Rashidinejad, Paria, et al. "Bridging offline reinforcement learning and imitation learning: A tale of pessimism." Advances in Neural Information Processing Systems 34 (2021): 11702-11716.
**Q4. (Comparison with existing theoretical works): How does the Theorem 1 fare to existing works? A detailed comparison on the improvement factors in the sample complexity would be useful.**
We thank the reviewer for raising this question -- it concerns our key theoretical contribution (the proof pipeline for Theorem 1). **Please refer to the general response for details due to the limited space.** We provide a shortened version here:
- **Technical contributions: a new proof pipeline.** We target a new problem -- bilinear MDPs with a sparse core matrix $M$ -- which brings new challenges: we need to use iterative algorithms with no closed-form solution, while prior works usually depend heavily on having a closed-form solution to the optimization problem. We develop a new proof pipeline that tracks the accumulated errors during the optimization process, which we believe is of independent interest for future RL works.
- **Sample complexity comparisons with prior works.** To our knowledge, we are the first to focus on this problem -- bilinear MDPs with a sparse core matrix $M$ for causal deconfounding and generalization -- so our sample complexity results cannot be directly compared to existing works. However, using prior works as references verifies the tightness of our results: we have a comparable dependency on all the salient parameters ($H$, $d$, $\xi$).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses, as well as for adding the relevant papers to the related work. I've increased my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate reviewer aSxm's dedicated efforts and insightful reviews, which helped improve our paper's quality during the revision and rebuttal phase. | Rebuttal 1:
Rebuttal: ## **General Response**
We thank all the reviewers for their dedicated efforts in providing valuable feedback on our work. All of the reviewers acknowledge the clarity of our problem formulation and the technical soundness of bridging bilinear MDPs with causality.
In this general response, we first clarify some important technical contributions, then we attach two sets of new experiment results: (1) a comparison with an advanced MBRL baseline, and (2) another experiment with raw pixel inputs.
### **Theoretical comparison with existing works**: for both technical contributions and final sample complexity results
Recall that we target bilinear MDPs in the offline setting with transition kernel $T = \phi^T M \mu$, where the core matrix $M$ is assumed to be sparse and unknown (representing the causal graph).
- **New challenges in our problem: no closed-form optimization solutions.** We aim to solve bilinear MDPs with a sparse core matrix $M$, which leads to an $\ell_0$ regression problem that is usually solved by iterative algorithms without a closed-form solution. This lack of a closed-form solution brings daunting challenges for developing provable suboptimality guarantees, since we can no longer explicitly write out the error term to be controlled. Existing proof pipelines in prior works for (bi)linear MDPs cannot be applied here, since they depend heavily on having a closed-form optimization solution [1, 2, 3].
- **Our new proof pipeline for optimization problems without closed-form solutions.** Prior works using ridge regression only need to control the statistical error of the final closed-form output, without analyzing the optimization process. For our $\ell_0$-regularized optimization, since no closed-form solution is available, we instead track the accumulated errors throughout the optimization process, summing the errors from each iteration to determine the final error terms. This new proof pipeline potentially generalizes to a wide range of RL problems without closed-form solutions, which we believe is of independent interest to RL theory, especially for RL algorithms involving iterative methods like Lasso.
- **Sample complexity comparisons with prior works.** First, we would like to highlight that we target a different problem -- bilinear MDPs with a sparse core matrix $M$ for causal deconfounding and generalization. To our knowledge, we are the first to focus on this problem, so our sample complexity results cannot be directly compared to existing works. As suggested by the reviewer, we will include the sample complexity results of existing works targeting bilinear MDPs [1-3] as references, which verify the tightness of our results. Specifically, denoting the size of the latent space as $d$ and the planning horizon as $H$, for any $\xi$-optimal policy, the sample complexity in [2, 3] is bounded by $\mathcal{O}(\frac{H^7 d^2 \log(H^2d)}{\xi^2} \log^2(\frac{Hd}{\xi}))$ and $\mathcal{O}(\frac{H^4d}{\xi^2})$, respectively. In our work, the sample complexity is bounded at a rate of $\mathcal{O}(\frac{\|M\|_0 H^2 \log (d)}{\xi^2 } \log^2(\frac{\|M\|_0}{\xi}))$, where $\|M\|_0\ll d^2$ is the sparsity level of the core matrix $M$. Our work has a comparable dependency on all the salient parameters ($H$, $d$, $\xi$) compared with prior works.
>[1] Yang, Lin, and Mengdi Wang. "Reinforcement learning in feature space: Matrix bandit, kernels, and regret bound." International Conference on Machine Learning. PMLR, 2020.
>
>[2] Du, Simon, et al. "Bilinear classes: A structural framework for provable generalization in rl." International Conference on Machine Learning. PMLR, 2021.
>
>[3] Zhang, Weitong, et al. "Provably efficient representation selection in low-rank Markov decision processes: from online to offline RL." Uncertainty in Artificial Intelligence. PMLR, 2023.
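The $\ell_0$-regularized regression discussed above is typically attacked with iterative schemes rather than a closed-form formula. As a generic illustration of such a scheme (iterative hard thresholding; this is a stand-in for the class of algorithms described, not the paper's exact solver):

```python
import numpy as np

def iht(A, b, k, step=None, iters=500):
    """Iterative hard thresholding for min_x ||A x - b||^2 s.t. ||x||_0 <= k.

    Each iteration takes a gradient step and then keeps only the k
    largest-magnitude entries -- no closed-form solution exists, so the
    final error accumulates over iterations (the situation the rebuttal's
    proof pipeline must handle).
    """
    n = A.shape[1]
    x = np.zeros(n)
    # A conservative step size: 1 / (spectral norm of A)^2.
    step = step if step is not None else 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        x = x + step * A.T @ (b - A @ x)      # gradient step on the least squares loss
        keep = np.argsort(np.abs(x))[-k:]      # indices of the k largest entries
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0                         # hard threshold the rest to zero
    return x
```

On a well-conditioned overdetermined system with an exactly sparse ground truth, this typically recovers the sparse solution; the point here is only to show why the analysis must sum per-iteration errors rather than bound a single closed-form estimator.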
### **Additional experiments**
We illustrate the success rate (%) comparison between **MOBILE**, **MOPO**, and **Ours** (BECAUSE) in the *Unlock* environments across six evaluation settings. We run experiments with 10 random seeds and report the mean and 95% confidence interval via t-testing.
MOBILE improves over MOPO under some OOD settings, yet the gap between MOBILE and our method remains significant.
|Env|MOPO|MOBILE|Ours|Env|MOPO|MOBILE|Ours|
|-|-|-|-|-|-|-|-|
|Unlock-I-R|21.5(1.9)| 15.9(1.0) | **32.7(2.8)** | Unlock-O-R | 16.6 (1.3) | 12.8(0.8) | **27.6(2.0)** |
|Unlock-I-M|84.8(5.1)| 72.4(1.7) | **98.0(4.9)** | Unlock-O-M | 39.5(4.7)| 40.7(1.8) | **68.8(1.5)** |
|Unlock-I-E|88.8(4.6)| 78.3(1.2) | **97.4(1.0)** | Unlock-O-E | 39.9(4.4) | 45.6(2.1) | **82.1(6.5)** |
----------
We experiment with **RGB images** instead of the vector observations of *Unlock*, and compare our method with two of our baselines, ICIL (model-free) and IFactor (model-based). We run experiments with 10 random seeds and report the mean success rate (%) and its 95% confidence interval via t-testing. All methods show performance decay compared to the vector-state setting, yet our method still prevails under visual inputs:
|Env | ICIL | IFactor | Ours | Env | ICIL | IFactor | Ours |
|-|-|-|-|-|-|-|-|
|Unlock-I-R|0.8(0.8) | 4.3(1.1) | **15.7(3.3)** | Unlock-O-R | 1.5(1.8) | 4.7(1.6) | **5.9(0.9)** |
|Unlock-I-M|5.3(2.0) | 30.2(4.1) | **62.0(4.6)** | Unlock-O-M | 8.6(4.2) | 15.4(2.4) | **71.6(9.1)** |
|Unlock-I-E|8.7(3.4) | 34.0(4.8) | **63.7(3.9)** | Unlock-O-E | 17.1(4.2) | 16.7(3.1) | **73.6(19.5)** |
In addition to the above general response, we also illustrate some tables that discuss related work, additional experiment details and new illustration examples in the one-page supplementary material.
Pdf: /pdf/efd9f1b634a271fbd33ea06869410ae1e32f51fe.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes BECAUSE, an algorithm designed to address the objective mismatch between model and policy learning in offline model-based reinforcement learning (MBRL). The algorithm first models the spurious correlations between the current state s and the current action a, as well as between the next state s' and the state-action pair (s, a), through the formulation of action-state confounded Markov Decision Processes (ASC-MDP). Based on this formulation, a compact method for learning causal representations is introduced. Once these causal representations are acquired, they are employed in both world model learning and planning to enhance the algorithm’s robustness and generalizability.
Strengths: The paper is well-written, presenting its intuition backed by three clearly laid-out steps that walk readers through the workings of the algorithm. The insight of mitigating the objective mismatch with causal awareness learned from offline data is novel. The paper also provides a thorough theoretical analysis, including proofs of error bounds and sample efficiency.
Weaknesses: The method relies on certain assumptions, such as the invariance of causal graphs, which might not always hold in real-world scenarios. Additionally, modeling the causal relationships with a linear structure may be overly simplistic, potentially limiting the accuracy and effectiveness of the algorithm in capturing more complex dependencies.
Technical Quality: 4
Clarity: 4
Questions for Authors: Is this method efficient in handling continuous actions? During the planning phase, computing the argmax Q can be time-consuming.
Limitation: The proposed method introduces additional complexity to the MBRL framework, which may pose challenges for practical implementation and scalability.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer 8KHa for their insightful and inspiring feedback. We are glad to know that the reviewer recognizes the novelty of our contributions, the clarity of our problem formulation, the theoretical contributions, and the empirical evidence that shows the advantages compared to baselines. We provide our response to the questions below.
**Q1. (Assumption on Invariant Causal Graph and Bilinear Structure): The method relies on certain assumptions, such as the invariance of causal graphs, which might not always hold in real-world scenarios. Additionally, modeling the causal relationships with a linear structure may be overly simplistic, potentially limiting the accuracy and effectiveness of the algorithm in capturing more complex dependencies.**
We thank the reviewer for this insightful discussion on our assumption. We elaborate more on the generality of invariant causal graph assumption and the bilinear representation structures.
1. Our assumption of an invariant causal graph targets the generalizable RL setting where the training and testing environments have a distribution shift but share the underlying **cause-and-effect relationship**. This assumption is commonly made in many prior causal RL works, such as [1, 2, 3]. The causal invariance setting is applicable in many real-world decision-making tasks like indoor navigation, autonomous driving, and manipulation, as shown in our experiments.
2. Regarding the over-simplification concern about the bilinear structure in our MDP, the nonlinearity of the transition dynamics can be captured by the feature encoders $\phi(s, a)$ and $\mu(s')$. In more scalable contexts such as foundation models, linear structure is also commonly used -- e.g., CLIP [4] applies an inner product in the latent space.
>[1] Zhang, Amy, et al. "Invariant causal prediction for block mdps." International Conference on Machine Learning. PMLR, 2020.
>
>[2] Wang, Zizhao, et al. "Causal Dynamics Learning for Task-Independent State Abstraction." International Conference on Machine Learning. PMLR, 2022.
>
>[3] Zhu, Wenxuan, Chao Yu, and Qiang Zhang. "Causal deep reinforcement learning using observational data." Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 2023.
>
>[4] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." International conference on machine learning. PMLR, 2021.
**Q2. (Continuous action): Is this method efficient in handling continuous actions? During the planning phase, computing the argmax Q can be time-consuming.**
We thank the reviewer for asking this important question about implementing the planner. As the reviewer mentioned regarding the planning phase, our method theoretically uses the following pessimistic value iteration:
$$
\overline{Q}(s, a) = r(s, a) - E_\theta(s, a) + \sum_{s'\in \mathcal{S}} \widehat{T}(s'|s, a) \widehat{V}(s').
$$
In practice, the above formulation is made applicable to continuous actions via model predictive control (MPC). We first uniformly sample random actions $a_{\text{sample}}\in \mathcal{A}$ from the action space, then roll out future states by imagination using the learned dynamics model $\widehat{T}(s'|s, a_{\text{sample}})$. Finally, we obtain a set of Q-values $\overline{Q}(s, a_{\text{sample}})$ and take $a = \arg\max_{a_{\text{sample}}\in \mathcal{A}} \overline{Q}(s, a_{\text{sample}})$ as the selected action to roll out the next step.
We denote the number of random action samples per step as "*planning population*", which is a fixed hyperparameter for all the methods in experiment environments in Appendix Table 9.
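The random-shooting MPC step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names `T_hat`, `r`, `E_theta`, and `V_hat` are hypothetical placeholders for the learned dynamics model, reward, uncertainty penalty, and value estimate, and a deterministic one-step model is used as a simplification of the expectation over $\widehat{T}(s'|s,a)$.

```python
import numpy as np

def plan_step(s, sample_action, T_hat, r, E_theta, V_hat, population=1000, rng=None):
    """One random-shooting MPC step with a pessimistic Q estimate:
    Q_bar(s, a) = r(s, a) - E_theta(s, a) + V_hat(T_hat(s, a)),
    maximized over `population` uniformly sampled candidate actions.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    candidates = [sample_action(rng) for _ in range(population)]  # uniform action samples

    def q_bar(a):
        s_next = T_hat(s, a)  # imagined next state under the learned model
        return r(s, a) - E_theta(s, a) + V_hat(s_next)

    return max(candidates, key=q_bar)  # argmax over the sampled actions
```

Here `population` plays the role of the "planning population" hyperparameter mentioned above.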
**Q3. (Scalability): The proposed method introduces additional complexity to the MBRL framework, which may pose challenges for practical implementation and scalability.**
We agree that our method adds additional structure on top of traditional MBRL in exchange for better OOD generalizability. To further demonstrate the scalability and computational efficiency of our method, we conduct additional experiments with **raw-pixel inputs in the Unlock environments** (see supplementary Figure (c) for details). The RGB input has a shape of (192, 192, 3), and we encode it with a 3-layer CNN. The results in the **second table of the general response / Table III in the supplementary** show that BECAUSE still outperforms the ICIL and IFactor baselines with visual inputs by a clear margin.
We also evaluate the inference speed of our method with visual inputs and attach details in the one-page PDF. We believe the inference speed (Table IV in the supplementary) is still in an acceptable range, comparable to that of IFactor (>20 FPS on image data).
In conclusion, the empirical evidence verifies (a) the scalability of BECAUSE and (b) the computational efficiency with high-dimensional inputs. | null | null | null | null | null | null |
Resolving Discrepancies in Compute-Optimal Scaling of Language Models | Accept (spotlight) | Summary: This paper aims to resolve the discrepant conclusion about compute-optimal model size drawn from (Kaplan et al., 2020) and (Hoffmann et al., 2022), two famous scaling-law papers that have guided the exploration of early large language models. The authors concluded that the discrepancies arise from last-layer FLOPS and warmup steps. And contrary to the claims of (Hoffmann et al., 2022), learning rate decay is not essential. As a byproduct, the authors also develop strategies to select hyperparameters, including optimal learning rates and batch sizes, and emphasize the need to use larger beta_2 when training with small batch sizes using Adam.
Strengths: 1. This paper targets an interesting and important topic on the discrepancy of scaling laws. Given that scaling laws guide the design decision to train modern large language models, solid results on this topic can confirm appropriate scaling strategies and help consolidate the reliability of scaling laws for applications such as large-scale performance prediction.
2. This paper presents extensive and detailed experiments to show how to align two scaling laws for compute-optimal model sizes.
2. The discussion on hyperparameters selection, including learning rates, batch sizes, warmup durations, and beta_2 of Adam provide helpful guidance on reproducing scaling law results.
Weaknesses: 1. Discrepancies in scaling laws raise concerns about the correctness of scaling laws. Most of the paper discusses how to resolve the discrepancies between the two scaling laws but does not fully address concerns about their correctness, as only limited results (only Fig. 4, with one dataset) show that resolving the discrepancies leads to accurate predictions in large-scale experiments.
2. The conclusion of this paper is not clear enough. It would be clearer if the authors could conclude a standard pipeline to reproduce the scaling law experiments and enable reliable predictions with this pipeline.
3. This paper focuses on L(C), the compute-optimal scaling law, instead of L(N, D), a more general scaling law that considers both model size and training data. Given that the latest large language models are overtrained to reduce inference costs, limiting the study to the compute-optimal setting may be less impactful.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. One of my concerns on scaling laws is that the need for careful hyperparameter selection may impede its applications. Firstly, it makes fitting a law extremely complex, and making mistakes that lead to unreliable predictions is easy. Also, the need to tune hyperparameters increases the costs to fit a law, indicating that we need accurate extrapolation to orders of larger magnitude to pay off the costs. I would like to learn about the authors' comments on potential strategies to stabilize the process and reduce the costs of hyperparameter tuning.
2. Given that many scaling laws exist, and they are discrepant possibly because of different experimental settings or assumptions, could you summarize the condition for the scaling laws in this paper?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: This paper only explores the compute-optimal settings and provides limited extrapolation results based on their findings (only one dataset and only scales up to 3e19 flops).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and for finding that our paper presents extensive and detailed experiments and provides helpful guidance on reproducing scaling law results. We address the weaknesses and questions below -- the main issue appears to be a lack of clarity regarding which scaling law (Kaplan or Hoffmann) is “correct.” We hope our answers clarify this, and we look forward to engaging in constructive discussion about how best to introduce these clarifications into the revision.
**Unclear which scaling law is correct**
Conclusively deciding the correctness of compute-optimal scaling laws is difficult due to computational barriers. Nevertheless, our paper adds to a growing body of evidence that the Hoffmann et al. (“Chinchilla”) scaling law is more accurate than the Kaplan et al. scaling law. Let us explain this in detail:
* The difficulty of validating scaling laws. Directly testing the prediction of a compute-optimal scaling law at the largest compute budget in which one can afford to train is almost impossible by definition, as such verification would require multiple largest-scale training runs. Indeed, neither Kaplan et al. nor Hoffmann et al. directly verify the compute-optimality of extrapolations of their scaling laws.
* Theoretical evidence in favor of the Hoffmann et al. scaling law. The compute-optimal exponent $\alpha=0.5$ found by Hoffmann et al. has the elegant interpretation that model size and data size should scale with constant proportion. Recent theoretical work has proven the same scaling holds for simple, analytically tractable settings under fairly broad assumptions. The paper “4+3 Phases of Compute-Optimal Neural Scaling Laws” by Paquette et al. shows this for a random-feature linear regression setting, while the paper “Information-Theoretic Foundations for Neural Scaling Laws” by Jeon and Van Roy shows this for data generated by infinite-width 2 layer ReLU networks using information-theoretic arguments. These works hint at (and, in the former case, explicitly conjecture) the possibility of the optimality of proportional scaling being a broader, universal phenomenon.
* Prior empirical evidence in favor of the Hoffmann et al. scaling law. Hoffmann et al. nevertheless provide strong evidence in favor of their scaling law - the Chinchilla 70B model outperformed the larger Gopher model that was trained using similar compute, architecture, and data, as well as other larger contemporary models. As we discuss in our related work section, in language modeling, subsequent work [12, 22] has produced scaling laws more similar to Hoffmann et al., and other papers rely on the validity of this scaling as a starting point [14, 29]. Therefore, we believe that - for the specific context of next-token prediction with Transformer decoder-only models - it is established that the Hoffmann et al. scaling law is the more accurate one; what was missing prior to our work is an explanation for why this is so.
* New evidence in favor of the Hoffmann et al. scaling law from our work. Our paper attributes the difference between the Kaplan and Hoffman scaling laws to what are arguably methodological errors in Kaplan et al.: Incorrect FLOP counting, overly-long warmup period, and suboptimal optimizer configuration. Furthermore, we show that the Hoffmann scaling law extends to non-decayed learning rate schedules. In addition to providing useful lessons for future scaling law studies, our findings further affirm that the Hoffmann et al. scaling is the “more correct” one.
**Unclear conclusion from the paper**
We hope the discussion above helps position our paper in context and thus clarify its conclusion. Moreover, our final reproduction of the Hoffmann scaling constitutes a well-defined pipeline that enables reliable predictions - the training code is based on the open-source open_lm library, and all of the configurations and analysis code are included in our submission.
**Limited impact of compute-optimal setting**
We agree that studying over-trained language models and, more broadly, the individual effects of scaling $N$ and $D$ are important topics that warrant further research. However, note that the very concept of overtraining is defined relative to compute-optimal training. Thus, a better understanding of the compute-optimal setting is an essential stepping stone for broader studies as well. In particular, our conclusions about FLOP counts, warmup, and hyperparameter tuning should be relevant beyond the strictly compute-constrained setting.
**Limited study of predictive accuracy**
Regarding the concern that we study predictive accuracy only in one figure (Figure 4) and one dataset, note that Figure 5 also studies the predictive accuracy as we vary the amount of compute used to fit the scaling laws. In the attached pdf we further study predictive accuracy in two ways:
1. We reproduce Figures 4 and 5 for the OpenWebText2 dataset (this involves simply running the analysis code attached to our submission with different parameters).
2. Following suggestions from Reviewers zsos and CKuz, we trained a model for ~8e19 FLOPs (4x our previous largest budget) under our predicted compute-optimal settings and hyperparameters. We show that the resulting loss agrees fairly well with our extrapolation in Figure 4, with further improvements as we fit it on slightly larger budgets of up to 1.6e18 FLOPs. Moreover, we confirm the optimality of our predicted learning rate and batch size - see the response to reviewer CKuz for details.
Thus, even though the focus of our paper is explaining the scaling law discrepancy and not measuring predictive accuracy, we study the latter to a greater extent than the influential papers of Kaplan et al. and Hoffmann et al. We nevertheless agree that the two extensions described above will strengthen our paper and will include them in the revision.
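The fit-then-extrapolate methodology discussed above can be sketched minimally: fit a power law $L(C) = a\,C^{b}$ to (compute, loss) pairs from small budgets via log-log least squares, then evaluate it at a larger budget. The numbers below are synthetic placeholders, not results from the paper.

```python
import numpy as np

def fit_power_law(C, L):
    """Fit L(C) = a * C**b by least squares in log-log space."""
    b, log_a = np.polyfit(np.log(C), np.log(L), 1)
    return np.exp(log_a), b

def extrapolate(a, b, C):
    return a * C ** b

# Synthetic example: losses following a clean power law at small budgets.
C_small = np.logspace(16, 18, 8)   # hypothetical compute budgets used for fitting
L_small = 4.2 * C_small ** -0.05   # hypothetical loss values
a, b = fit_power_law(C_small, L_small)
L_pred = extrapolate(a, b, 8e19)   # predict loss at a much larger budget
```

Real scaling-law fits are more involved (e.g., irreducible-loss terms and robust fitting), but this captures the basic extrapolation step being validated.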
(Due to length constraints we answer the two questions in the review in the comment below.)
---
Rebuttal 2:
Comment: **Question 1: hyperparameter sensitivity**
As we remark in the limitations section, we believe that scaling up the models reduces optimization hyperparameter sensitivity, which would explain why Hoffmann et al. were able to obtain their scaling law without very careful parameter tuning. Moreover, Figure 10 in our paper shows that scaling up the model size leads to higher robustness to learning rate and batch size.
Finding ways to make smaller-scale models also require less per-scale tuning is an important problem - maximal update parameterization (𝜇P) is a promising direction still under active research; we will discuss it in the revision.
**Question 2: Condition for scaling laws**
We focus on decoder-only Transformer models, trained for next token prediction with the log loss. This focus is the same as in Kaplan et al. and Hoffmann et al., the two papers that presented the scaling laws in question. Beyond that, we argue that the discrepancy between the two scaling laws is not due to a difference in assumptions, but rather due to differences that are largely methodological errors in Kaplan et al.
---
Rebuttal 3:
Comment: Dear reviewer,
Before the author-reviewer discussion period ends, we would like to know if our responses above fully address your questions and concerns. If so, we kindly ask you to reconsider our paper’s score. | Summary: This paper studies the discrepancy between the scaling laws of Kaplan et al. and Hoffmann et al. and identifies three factors behind it: taking the last-layer cost into account, warmup duration, and tuning optimizer hyperparameters with model scale. It shows the transition from the Kaplan et al. scaling laws before correcting for these effects to the Hoffmann et al. scaling laws after correcting for them.
Strengths: The paper studies the important question of scaling laws for language models thoroughly (with the limitation of model scale). This fills an important missing piece in the literature and places the scaling law research on a firmer grounding. While running these experiments they also discover additional interesting observations such as the importance of tuning beta2 at smaller batch sizes.
Weaknesses: The main limitation of this work as also discussed by the authors is the scale of the models use (< 1B).
A stronger instance of this limitation is that the paper uses experiments on models of size 5M to 100M to fit scaling laws for learning rate and batch size; it is unclear whether these laws would hold up for larger models. It would have been useful if the paper had checked this optimality at a significantly larger scale (~1B, not just 200M). This is less of a concern for the runs in the paper, since all of them have model sizes <=1B and since Adam is known to be stable across a range of learning rates (Wortsman et al. 2023).
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Is the sweep described in appendix E done with constant learning rate or cosine decay?
2. While the paper's choice to always keep the warmup duration below the model size is certainly a step in the right direction compared to Kaplan et al., is there a reason to believe that this is optimal, or that the scaling law would not change with the warmup duration?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful questions and kind words. We were glad to read that you find our paper studies the important question of scaling laws thoroughly and fills an important missing piece in the literature. Below, we address the weaknesses and questions raised in the review.
**Scale of experiments**
Following the reviewer’s suggestion, we extend the training of our largest model (with 901M parameters) to ~14B RefinedWeb tokens (the compute-optimal value for that model size according to our scaling law), and test whether our predicted batch size and learning rate are optimal at this scale. In particular, we compare the learning and batch size prescribed in Table 4 (0.0024 and 640, respectively) to 8 configurations, each varying either the learning rate or the batch size. We evaluate the models on held-out RefinedWeb data and calculate the validation loss and standard deviation due to sampling as described in L583. We tabulate the results below:
| Learning Rate | Validation Loss |
|----------------:|:------------------|
| 0.0006 | 2.962 ± 0.002 |
| 0.0012 | 2.947 ± 0.002 |
| 0.0024 | 2.943 ± 0.002 |
| 0.0048 | 2.964 ± 0.002 |
| 0.0096 | 2.991 ± 0.002 |
| Batch Size | Validation Loss |
|-------------:|:------------------|
| 160 | N/A |
| 320 | N/A |
| 640 | 2.943 ± 0.002 |
| 1280 | 2.955 ± 0.002 |
| 2560 | 3.013 ± 0.002 |
These results indicate that we accurately predict the optimal hyperparameters at the ~1B scale. The runs with batch sizes 160 and 320 did not complete by the rebuttal deadline, and we will attach them as they arrive; intermediate loss evaluations suggest they will not outperform our predicted optimal batch size.
**Question 1: learning rate scheduler in sweep**
We use a constant learning rate in our sweep experiments, with a warmup duration of N (number of parameters) tokens.
**Question 2: warmup heuristic**
To address the reviewer’s question, we perform a small-scale experiment where we train a $N$=108M parameter model on the RefinedWeb dataset for $20N=$~2.1B tokens, using the hyperparameters in Table 4 (for the appropriate model size) but varying the warmup duration from $N/4$ to $16N$. We evaluate the models on held-out RefinedWeb data and calculate the validation loss and standard deviation due to sampling as described in L583. We tabulate the results below:
| Warmup tokens | Validation loss |
|:---------|:------------------|
|$N/4$ | 3.546 ± 0.002 |
| $N/2$ | 3.542 ± 0.002 |
| $N$ | 3.532 ± 0.002 |
| $2N$ | 3.528 ± 0.002 |
| $4N$ | 3.536 ± 0.002 |
| $8N$ | 3.543 ± 0.002 |
| $16N$ | 3.586 ± 0.002 |
The results show that a warmup duration of $N$ is nearly optimal, with durations of up to $4N$ achieving very similar and even slightly better results. We conclude that the training loss is fairly insensitive to the precise warmup duration as long as it stays in a reasonable range. Consequently, we believe that our scaling laws are not strongly dependent on the particular choice of warmup duration.
---
Rebuttal Comment 1.1:
Title: Complete results of batch size sweep
Comment: We tabulated the full results of the batch size sweep below:
| Batch Size | Validation Loss |
|-------------:|:------------------|
| 160 | 3.050 ± 0.002 |
| 320 | 2.970 ± 0.002 |
| 640 | 2.943 ± 0.002 |
| 1280 | 2.955 ± 0.002 |
| 2560 | 3.013 ± 0.002 |
The results show that our predicted batch size is optimal among the values we have tested.
---
Rebuttal 2:
Comment: We have queued the 4 additional runs requested and hope they will complete before the discussion period is over. However, we note that Figure 10 suggests that, for larger models, the interaction between batch size and learning rate is not very strong. | Summary: The paper provides an explanation for the discrepancies between the scaling laws of Kaplan et al. and Hoffman et al. The authors start by reproducing the scaling laws of Kaplan at al. They then introduce incremental changes in the methodology: accounting for the last layer computational cost, setting a more reasonable learning rate warmup, and more thorough scale-dependent hyperparameter optimization. After introducing these changes, they recover the scaling laws of Hoffman et al. The paper provides evidence that careful learning rate decay is not necessary to resolve the differences between these two scaling laws. The paper demonstrates the importance of such scale-dependent hyperparameters for obtaining reliable scaling laws with small computational budgets.
Strengths: I believe the paper to be novel — I am unaware of prior work aiming to explain the differences between the scaling laws of Kaplan et al. and Hoffman et al., beyond the hypothesis put forth by Hoffman et al. which this paper directly addresses.
The paper is very clearly written. Its experimental set-up is well-motivated in relation to the prior work of Kaplan et al. and Hoffman et al. The experiments are thorough, and the methodology sound.
Beyond the scientific interest of studying why Kaplan et al. and Hoffman et al. arrived at such different scaling law coefficients, this paper provides valuable insights for researchers studying the scaling of language models, particularly regarding the importance of scale-dependent hyper-parameter optimization for small computational regimes.
Weaknesses: Hoffman et al. suggest that their fitted scaling law parameters differ from those of Kaplan et al. because Kaplan et al. include very sub-optimally trained models in their analysis, which bias the fitted parameters. That is, for certain (N, D, L) tuples, the loss L is substantially overestimated. They suggest that this is mainly due to the poor learning rate schedule used by Kaplan et al. Contrary to what the paper under review claims at times, I view this paper as supporting the broad hypothesis of Hoffman et al. rather than disproving it. Starting with training choices similar to those of Kaplan et al., the authors are able to reproduce their scaling law. They then show that improving these training choices — via a more reasonable warmup schedule, batch size, learning rate, etc. — yields a scaling law that more closely resembles that of Hoffman et al. That is, the poorer training hyperparameters of Kaplan et al. explain the difference with Hoffman et al. While Hoffman et al. emphasize the learning rate schedule, I don’t read their paper as claiming that learning rate decay is necessary to obtain their scaling law — as the authors claim at times — but rather that the poor learning rate schedule used by Kaplan et al. is sufficient to mess up their derived scaling law. To me, the hypothesis of Hoffman et al. is consistent with the existence of different hyperparameter choices for which a similar (N, D, L) relationship is attainable with a constant learning rate — they do not claim that it is not possible.
The authors decouple “warmup” from “decay”, whereas to me the learning rate schedule comprises both. Thus, when Hoffman et al. say that they set “the learning rate schedule to approximately match the number of training tokens”, I imagine that they mean adapting both the warmup and the decay period (e.g., using heuristics such as warming up for 5% of the total training steps). Unfortunately, Hoffman et al. do not give details on what warmup schedule they used. However, looking at Figure A1, it is clear that the warm-up period they use is a small fraction of the total number of training steps — unlike Kaplan et al. Broadly, I find it misleading to say “Counter to a hypothesis of Hoffmann et al., we find that careful learning rate decay is not essential” when Hoffman et al. say “Our work differs from Kaplan et al. (2020) in several important ways. First, the authors use a fixed number of training tokens and learning rate schedule for all models. […] In contrast, we find that setting the learning rate schedule to approximately match the number of training tokens results in the best final loss regardless of model size” — notice the difference in language: “learning rate schedule” (which is broadly understood to also include warmup) instead of “learning rate decay”.
I ask the authors to please better scope their claim: “Counter to a hypothesis of Hoffmann et al. [21], we find that careful learning rate decay is not essential for the validity of their scaling law” — Hoffman et al. do not claim necessity, and they refer to “learning rate schedule” rather than only learning rate decay.
Lastly, the authors may be underestimating the value of learning rate decay, since they only consider models that are reasonably close to Chinchilla-optimal. When considering, for the same model size, very different number of training tokens, cosine decay may play a more important role, at least if all other hyper parameters are kept fixed. The authors are aware of this and discuss it in the limitations section.
Technical Quality: 2
Clarity: 4
Questions for Authors: For Section 3.5, do you warmup the learning rate? How so?
Hoffman et al. decay the learning rate to 10% of peak; why do you use 1% of peak?
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: The authors adequately discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments and for finding our paper novel, clearly written, and our experiments well-motivated, sound, thorough, and providing valuable insights. Below we address the main concern regarding the conjecture in Hoffmann et al., as well as the two additional questions. With these points addressed we hope the reviewer will consider increasing our paper’s score.
**The conjecture in Hoffmann et al.:**
We agree that the term “learning rate schedule” comprises both the warmup and decay components, and that it is plausible to interpret the “Modeling the scaling behavior” paragraph in Hoffmann et al. §2 as conjecturing that both components contribute to the scaling law discrepancy. However, based on Appendix A of Hoffmann et al. as well as the subsequent literature discussing this paper, we believe that Hoffmann et al. emphasize the decay component of the learning rate schedule in their conjecture.
_A closer look at Hoffmann et al. Appendix A:_ In §2, Hoffmann et al. refer to Figure A1 to provide evidence for the claim that “setting the learning rate schedule to approximately match the number of training tokens results in the best final loss regardless of model size.” However, in Figure A1 they only vary the decay component of the learning rate, keeping the warmup. Hence, it appears that — in their conjecture regarding the scaling law discrepancy — Hoffmann et al. equate learning rate schedule with learning rate decay.
_Subsequent literature:_ The following sources treat the conjecture of Hoffmann et al. as pertaining to only learning rate decay, suggesting that the above interpretation of Hoffmann et al.’s conjecture is common in the literature:
* Hu et al. [44, §4.1] write that
> … Hoffmann et al. (2022) make a key observation that setting $T > S$ results in dropped performance while setting $S = T$ results in improved training efficiency, confirming that the learning rate shouldn’t be kept high throughout the training.
They do not mention aspects of the learning rate schedule other than the decay period.
* In “Scaling laws in compute-optimal training” (to be cited in the revision) Hägele et al. write that
> Importantly, the Chinchilla project (Hoffmann et al., 2022) showed that the cosine schedule achieves optimal loss only when the cycle length matches the training duration, but underestimates the model performance during training.
The cosine cycle length controls the learning rate decay, not the warmup.
* A LessWrong blog post titled “New Scaling Laws for Large Language Models” discusses the scaling law discrepancy and states that
> It looks like OpenAI used a single total annealing schedule for all of their runs, even those of different lengths. This shifted the apparent best-possible performance downwards for the networks on a non-ideal annealing schedule. And this lead to a distorted notion of what laws should be.
Again, no mention of warmup.
We nevertheless agree that the interpretation of Hoffmann et al.’s conjecture is subtle and requires further justification and clarification, which we are happy to add to the revision. In particular, in line 8 (the abstract) we will write “Counter to a hypothesis implied in Hoffmann et al.” and in line 30 we will remove the word “decay” from “tailoring the learning rate decay schedule”. In addition, we will add an appendix with a detailed discussion about the conjecture of Hoffmann et al., along the lines of our response above.
**Question 1: Warmup in hyperparameter sweep**
We use the same heuristic for choosing warmup duration as discussed in section 3.3, equating the warmup duration (in tokens) with the number of model parameters. We will add this detail to Appendix E.
**Question 2: Cooldown value**
We decayed the learning rate to 0.01 of its peak value following Gadre et al. [14], which was the source of most of the initial hyperparameters used in our experiments. Following the reviewer’s question, we ran a small-scale experiment to test the potential effect of using a different final learning rate. In particular, we train a model with 108M parameters on the RefinedWeb dataset. We use the same hyperparameters as in Table 3, with a training duration of ~2.1B tokens. We use three different values for the final learning rate at the end of the cosine decay: 0.1%, 1%, and 10% of the peak learning rate. Finally, we evaluate the models on held-out RefinedWeb data and calculate the validation loss and standard deviation due to sampling as described in L583. We tabulate the results below:
| Learning rate at the end of the run | Validation loss |
|-----------:|:------------------|
| 0.001 x peak learning rate | 3.444 ± 0.002 |
| 0.01 x peak learning rate | 3.442 ± 0.002 |
| 0.1 x peak learning rate | 3.440 ± 0.002 |
This experiment suggests that the precise level of final decay has little impact on the quality of the trained model, with no statistically significant difference between decay to 1% and decay to 10%. Therefore, we do not believe the final decay level affects the scaling laws we obtain.
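A cosine decay schedule with a configurable final fraction of the peak learning rate (the 0.1%, 1%, and 10% levels tested above) can be sketched as follows; the function name and the peak value are illustrative assumptions, and warmup is omitted for brevity.

```python
import math

def cosine_lr(step: int, total_steps: int, peak_lr: float, final_frac: float) -> float:
    """Cosine decay from peak_lr down to final_frac * peak_lr over total_steps."""
    floor = final_frac * peak_lr
    cos_factor = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return floor + (peak_lr - floor) * cos_factor

peak = 3e-3  # hypothetical peak learning rate
for frac in (0.001, 0.01, 0.1):  # the three final-LR levels tested
    print(cosine_lr(1000, 1000, peak, frac))  # ends at ~frac * peak
```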
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, and for the detailed discussion of the conjecture in Hoffmann et al. I maintain my positive assessment of the work.
---
Reply to Comment 1.1.1:
Comment: We are glad that our discussion of the conjecture in Hoffmann et al. was helpful and we are sure including it in the revision will also help future readers - thank you for bringing it up! The interpretation of Hoffmann et al.’s conjecture was the main weakness pointed out in the review. Now that it is resolved, and the two additional questions are answered, would you be willing to increase your score?
---
Rebuttal 2:
Comment: I think that my current score is appropriate. Thank you. | Summary: - Identifies and eliminates discrepancies between the scaling laws of Kaplan et al and Hoffman et al via three interventions: accounting for un-embed layer FLOPs, correcting warmup hyperparameters, and tuning optimizer hyperparameters.
- Also derives scaling laws for optimal learning rate and batch size as a function of model size
Strengths: - The discrepancy between the scaling laws of Kaplan et al and Hoffman et al has been a major source of confusion/frustration in scaling law research. This paper conclusively identifies + mitigates this discrepancy through multiple targeted interventions.
- They also reproduce these results on two datasets, indicating that this discrepancy is not specific to the pretraining dataset.
- Section 4 provides complementary findings that are helpful for speeding up scaling law experiments: scaling laws for optimal learning rate and batch size, role of AdamW $\beta_2$, constant learning rate schedule suffices to reproduce Chinchilla scaling laws (albeit with higher loss)
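The kind of power-law fit used for such hyperparameter scaling laws can be sketched generically; the (N, learning rate) pairs below are synthetic, not the paper's data, and the exponent is a made-up example.

```python
import numpy as np

# Hypothetical (model size, optimal LR) pairs following lr = a * N**b.
N = np.array([2e7, 5e7, 1e8, 3e8, 1e9])
lr = 0.5 * N ** (-0.33)

# Fit log(lr) = log(a) + b * log(N) with ordinary least squares.
b, log_a = np.polyfit(np.log(N), np.log(lr), 1)
print(round(b, 3))                     # recovers the exponent -0.33
print(round(float(np.exp(log_a)), 3)) # recovers the prefactor 0.5
```

Once fitted at small scales, such a law can be extrapolated to predict near-optimal hyperparameters at larger model sizes.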
Weaknesses: The main limitation of this paper (as noted in the discussion) is that the range of models considered is fairly small compared to Hoffman et al. due to compute constraints. Nevertheless, it would be useful to have some sanity checks for checking if the estimated scaling laws for loss, learning rate, and batch size extrapolate reliably to 3B and 7B scale models.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Why did you opt for the IsoFlop estimation approach (approach 2 in chinchilla) rather than the arguably more direct "approach 3" in the chinchilla paper?
- In Appendix B, you mention that the main non-negligible source of error in approximating FLOPs with 6ND is the attention-related computation. This error can be quite non-trivial for small models with large context lengths, but would be small for large models. So, how would the scaling law exponents change if the FLOP estimator accounts for attention flops?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful questions and the encouraging comments, recognizing that our work conclusively identifies and mitigates the discrepancy between the two scaling laws, and appreciating the value of findings for speeding up scaling law experiments. Below, we address the questions brought up in the review.
**Question 1: Approach 3 in Hoffmann et al.**
In our opinion, Approach 3 is less direct because it arrives at the scaling law for $N^\star(C)$ via an assumption about how the loss depends on $N$ and $D$. In contrast, Approach 2 and our approach directly estimate $N^\star(C)$ and fit a scaling law to it, without requiring additional assumptions on the structure of the loss. Moreover, Besiroglu et al. [7] found that Approach 3 is sensitive to the curve-fitting method, which led to inaccurate conclusions in Hoffmann et al. Therefore, we decided not to use Approach 3.
**Question 2: Attention FLOPs**
The scaling law exponents do not change significantly when we account for the attention FLOPs; see Figure 6 in the appendix and the discussion in lines 531–551.
**Extending experiments to larger models**
Training a model with $N=3B$ with $\rho=16.5$ (our compute optimal tokens-to-parameters ratio) would require roughly 9e20 FLOPs, which is ~36 times larger than the largest individual training run in this project. A 7B model would be 200 times more expensive. Since verifying the optimality of hyperparameters requires several such training runs, such experiments are beyond our current resources. Consequently, we conducted a more modest scale-up to ~8e19 FLOPs, which is ~4 times the largest budget in our original FLOP grid. As discussed in our response to Reviewer Kjsg, our predicted learning rate and batch size are near-optimal even at that scale. Furthermore, as Figure 3 in the attached pdf shows, the loss remains fairly predictable as well.
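The compute figure quoted above can be reproduced with the standard $C \approx 6ND$ approximation and $D = \rho N$; the helper below is a sketch, not code from the paper.

```python
def train_flops(n_params: float, rho: float) -> float:
    """Approximate training compute C ~ 6*N*D with D = rho * N tokens."""
    d_tokens = rho * n_params
    return 6.0 * n_params * d_tokens

c_3b = train_flops(3e9, 16.5)
print(f"{c_3b:.2e}")  # ~8.91e20 FLOPs, matching the 'roughly 9e20' figure
```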
---
Rebuttal Comment 1.1:
Comment: Ok, thank you. I'd like to keep my score as is, great work! | Rebuttal 1:
Rebuttal: We attach a file containing the relevant figures for the rebuttal.
Pdf: /pdf/0a110bb88f3dfe62031f3a52c909cf03a43cbe1a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers | Accept (poster) | Summary: This paper explores the theoretical foundations of in-context learning (ICL) within transformer architectures, particularly focusing on how various components contribute to ICL. The study examines a two-attention-layer transformer trained on n-gram Markov chain data, analyzing its role in ICL through gradient flow convergence to a limiting model that performs a generalized "induction head" mechanism. The key contributions include a rigorous theoretical analysis, highlighting the distinct roles of transformer components: the first attention layer as a copier, the feed-forward network with normalization as a selector, and the second attention layer as a classifier. The authors identify three phases of training dynamics and validate their theoretical findings with simulation experiments. This work provides a comprehensive understanding of how ICL is facilitated by the synergistic contribution of transformer components, bridging the gap between empirical observations and theoretical underpinnings, and offering valuable insights into transformer model design and training for complex tasks.
Strengths: - The introduction of a generalized "induction head" mechanism and the detailed analysis of training dynamics in a two-attention-layer transformer trained on n-gram Markov chain data represent a creative combination of existing ideas, applied in a fresh and impactful way.
- The theoretical analysis seems rigorous and well-supported by mathematical derivations and proofs. The authors provide a clear and logical progression from the problem setup to the main results, ensuring that each step is justified and well-explained. Though I did not check the appendix with details, the main content appears robust.
- The paper is well-organized and clearly written, making complex concepts accessible. The use of diagrams and mathematical notation is appropriate and aids in the understanding of the key ideas.
Weaknesses: - The work primarily focuses on the gradient flow of training dynamics. It would be beneficial to analyze gradient descent training to better align with practical implementations.
- The paper could better contextualize its contributions by providing a more detailed comparison with prior work, such as [1]. This comparison should include aspects such as optimization methods, model structure, input embedding, and loss function. A comparison table is suggested to highlight differences and similarities clearly.
- The paper makes several assumptions, particularly in the theoretical analysis. For example, it heavily relies on Relative Positional Embedding. Discussing the potential limitations these assumptions impose and how they might affect the generalizability of the results would be helpful.
- The paper provides guarantees on training dynamics, but it would be beneficial to demonstrate the generalization ability of the proposed approach.
- In line 318, there is a typo: it should be $\partial_t a(t) \asymp e^{a(t)}$.
[1] Nichani, Eshaan, Alex Damian, and Jason D. Lee. "How transformers learn causal structure with gradient descent." arXiv preprint arXiv:2402.14735 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you provide empirical evidence or theoretical guarantees regarding the generalization ability of your model?
- Do you plan to include additional experiments to validate your theoretical findings on more diverse datasets or more complex tasks?
- In what sense do you use autoregressive transformers in your analysis?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors mention in the paper that Section C.1 contains a discussion of limitations, but this section is either missing or not clearly labeled.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. Please see our response below.
**Gradient flow analysis**: We focus on gradient flow to simplify the theoretical analysis, as it is a common approach for understanding the training dynamics of complex models.
We believe that the results can be extended to gradient descent with a sufficiently small step size.
Given that our model includes two layers of multi-head softmax attention and feedforward neural networks, analyzing the training dynamics is challenging even with gradient flow.
Furthermore, we use gradient descent in our experiments, and the results align with our theoretical findings. This demonstrates that our gradient flow analysis is consistent with the gradient descent implementation.
**Comparison to Nichani et al. (2024)**:
The major distinctions between our work and Nichani et al. (2024) lie in the model structure and data structure.
We will include the following table for a clear comparison to Nichani et al. (2024).
| | Model structure | Causal data structure | Optimization methods | Input Embedding | Loss function |
|------------------------------------------|------------------------------------------|-----------------------|----------------------|----------------------------|---------------|
| Our work | Two layers of multi-head attention with FFN and layer normalization | Have $n$ parents | Gradient flow | RPE, Word embeddings | Cross entropy |
| Nichani et al. (2024) | Two layers of single head attention | At most one parent | Gradient descent | Positional Embedding, Word embeddings | Cross entropy |
**Relative Positional Encoding**:
Our findings highlight the benefits of using Relative Positional Encoding (RPE). Previous works have shown that transformers with RPE outperform those with standard positional encoding. For instance, Allen-Zhu et al. (2023) demonstrate that transformers with RPE are more effective at learning hierarchical language structures.
In our work, as we consider an $n$-gram data model, RPE is a suitable choice for positional encoding. Specifically, RPE has been utilized to effectively capture potential parent sets, thereby enhancing the length generalization ability of the model. As long as the model accurately identifies parent sets with RPE, it will also perform well when tested on input sequences of varying lengths $L$.
While RPE may not be the best choice for different datasets, a generalization study of RPE is beyond the scope of our current work.
**On Generalization ability**:
- *Generalization to test data*:
The generalization capabilities are tied to the model's ability to infer the true parent sets from the data. Our results demonstrate that the parameters are updated to maximize the modified chi-square mutual information. We find that the modified chi-square mutual information is an effective metric for identifying the parent sets as described in Section 3.3. For instance, when
the number of parents is known a priori, the modified chi-square mutual information is maximized by the true parent set.
We have conducted additional experiments to evaluate the model's generalization performance on sequences having lengths different from the training data.
Please see the attached pdf for the experimental results under the general rebuttal.
These additional experiment results indicate that the trained model successfully captures the true parent sets, implying the strong performance on test data. Even if the test data have a different distribution or length
$L$, the model will generalize well as long as they maintain the same parent structure as the training data.
- *Generalization to other types of data/model*:
The investigation of data structures beyond the $n$-gram model and more general transformer model is beyond the scope of our current study.
We believe it is an important direction for future work to study the training dynamics of transformers in more general settings.
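The parent-detection property of the chi-square mutual information discussed above can be illustrated with a toy computation; this sketch uses the plain chi-square MI formula as a stand-in for the paper's modified variant, with hypothetical binary-token joint distributions.

```python
import numpy as np

def chi2_mi(joint: np.ndarray) -> float:
    """Chi-square mutual information: sum (p(x,y) - p(x)p(y))^2 / (p(x)p(y))."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    indep = px * py
    return float(((joint - indep) ** 2 / indep).sum())

# Binary tokens. True parent: the next token copies the parent w.p. 0.9.
joint_parent = np.array([[0.45, 0.05],
                         [0.05, 0.45]])
# A non-parent position is independent of the next token.
joint_nonparent = np.full((2, 2), 0.25)

print(chi2_mi(joint_parent))     # 0.64 -> the true parent scores high
print(chi2_mi(joint_nonparent))  # 0.0  -> a non-parent scores zero
```

A position-selection rule that maximizes this score over candidate positions thus recovers the true parent in this toy case.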
**On Additional experiments**:
The goal of our paper is to examine the induction head mechanism of transformers within the $n$-gram data model framework. The $n$-gram model is widely used to evaluate transformers' capabilities in language modeling from both experimental and theoretical perspectives. Our experiments show that the empirical results align with our theoretical findings that the model implements an induction head.
We leave it as future work to explore the model's performance on more complex tasks and datasets.
**Autoregressive Transformer in analysis**:
We want to clarify that our model is autoregressive in that it can predict the $(L+1)$-th token based on the previous $L$ tokens by outputting a distribution over the possible tokens. Importantly, $L$ can be arbitrarily large, provided it is sufficiently large to capture the context needed for accurate predictions.
**Limitation**: We will include discussion on the limitations of our work in the revision.
**Typo**: Thank you for pointing out the typo. We will correct it in the revision.
**Reference**
Allen-Zhu, Zeyuan, and Yuanzhi Li. "Physics of language models: Part 1, context-free grammar." arXiv preprint arXiv:2305.13673 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses. I have no further questions and will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your appreciation of our work | Summary: In the paper, the authors analyze a simplified two-layer transformer model trained on mixtures of Markov chains, proving that gradient flow converges to a particular configuration of copier-selector-classifier network, that implements a generalized induction head mechanism. The authors further provide numerical experiments validating their findings.
Strengths: The paper is well motivated and well written. The introduction of the generalized induction head estimator is original and effective. I checked the most important proofs and they look sound to me.
Weaknesses: The study of full transformer models is certainly extremely complex, and the introduction of simplifications in the model is necessary for an effective theoretical study of the network. However, one simplification that might prevent the applicability of the authors' findings in real-world scenarios is the complete removal of the word embedding layer, and the fact that the first attention matrix is a function of only the relative positional embeddings, without any information on the actual input tokens. Nonetheless, I believe that the study of the generalized induction head estimator is enough for the paper to be worthy of publication in the conference.
Technical Quality: 4
Clarity: 3
Questions for Authors: As mentioned above, I am a bit concerned about the removal of the word embedding layer and the fact that the first attention matrix is a function of just the positional embeddings. Is this simplification really necessary? What would break down in the proof with the introduction of word embeddings? Is there any hope to prove a similar result with word embeddings, and maybe with absolute embeddings, that to the best of my knowledge are used in all current state-of-the-art transformer models?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. Please see our response below.
**First layer word embeddings**:
As stated in our general rebuttal, we study a simplified model for a better understanding of the functionality of each component in detail.
Although the first attention layer does not contain trainable word embeddings, it can be considered as
$$
\sigma(XWX^\top + W_P)X,
$$
where we fix the word embeddings $W \in \mathbb{R}^{d \times d}$ as a random matrix with Gaussian entries with mean $0$ and variance $1/d$.
This ensures that $x_l^\top Wx_{l'} \approx 0$ for all $l$ and $l'$ when the dimension $d$ is large. It can be interpreted as fixing $W$ at the initialization of the training, and it is common practice to set it as a random Gaussian matrix. Similar arguments can also be found in Bietti et al. (2023).
We remark that theoretical analysis of training dynamics becomes challenging when parameters for word embeddings are included. Previous works, such as Nichani et al. (2024) and Jelassi et al. (2022), often consider only positional embedding in the attention layer to focus on the positional information. Also, it has been empirically observed that the first attention layer's QK matrix has almost zero values for the token embedding parts during training (Nichani et al., 2024). This suggests that the word embeddings do not contribute to the induction head mechanism.
For our problem, when we incorporate trainable word embeddings, theoretical analysis of the training dynamics in Stage 2 becomes much more complicated. Since the first attention layer depends on both positional and word embedding weights, the dynamics of $w^{(h)}$ become intertwined with the dynamics of the word embedding weights. Then, simultaneous tracking of both dynamics is necessary, resulting in no closed-form solutions for these stochastic differential equations. A comprehensive analysis of the dynamics involving word embedding weights may be addressed in future work.
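The claim that $x_l^\top W x_{l'} \approx 0$ for large $d$ can be checked numerically; this sketch assumes one-hot token embeddings, in which case the cross term reduces to a single entry of $W$, with standard deviation $1/\sqrt{d}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cross(d: int, pairs: int = 2000) -> float:
    """Average |e_i^T W e_j| (i != j) for W with i.i.d. N(0, 1/d) entries.

    For one-hot embeddings this is just |W[i, j]|, which concentrates
    around 0 at rate ~ 1/sqrt(d).
    """
    W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, d))
    i = rng.integers(0, d, size=pairs)
    j = (i + 1 + rng.integers(0, d - 1, size=pairs)) % d  # ensure j != i
    return float(np.abs(W[i, j]).mean())

print(mean_abs_cross(16))    # noticeable cross terms at small d
print(mean_abs_cross(4096))  # near zero at large d
```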
**References**
- Bietti, Alberto, et al. "Birth of a transformer: A memory viewpoint." Advances in Neural Information Processing Systems 36 (2024).
- Nichani, Eshaan, Alex Damian, and Jason D. Lee. "How transformers learn causal structure with gradient descent." arXiv preprint arXiv:2402.14735 (2024).
- Jelassi, Samy, Michael Sander, and Yuanzhi Li. "Vision transformers provably learn spatial structure." Advances in Neural Information Processing Systems 35 (2022): 37822–37836.
---
Rebuttal Comment 1.1:
Comment: Thank you for your insightful comment. I will keep my positive score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your appreciation of our work. | Summary: This research focuses on analyzing in-context learning (ICL) in transformer models specifically trained on n-gram Markov chain data. It analyzes how the components of the transformer, attention layers and feed-forward networks, contribute to processing and learning from this type of data, demonstrating their roles through simulation experiments.
Strengths: The strength of this study lies in its theoretical analysis that connects specific roles, such as 'copier' and 'selector,' to each layer of the transformer when processing n-gram Markov chain data.
Weaknesses: The analysis is specifically limited to n-gram Markov chain data, which may not reflect the complexity or diversity of data typically processed by language models. Furthermore, the structure of the feed-forward network is unusually specialized, and the learning process is divided into three distinct stages, which is not typical for standard transformer training. Additionally, the role of the attention mechanism is reduced merely to specifying positions within the n-gram model, diverging from its broader and more dynamic functionalities in conventional transformer applications. This specificity and simplification may limit the generalizability of the findings to more standard scenarios.
Another weakness of the study is the use of the Modified Chi-squared Mutual Information (MI) metric without sufficient explanation of its significance or relevance to real-world applications.
While Chi-squared MI may be theoretically justified in the overall validation analysis, it is difficult for readers to grasp the practical implications of using this metric. Additionally, although the following existing study analyzes the learning of induction heads in bi-gram models, an adequate explanation is needed of the fundamental differences between that method and the n-gram approach taken in this study.
Birth of a Transformer: A Memory Viewpoint
Alberto Bietti, Vivien Cabannes, Diane Bouchacourt, Herve Jegou, Leon Bottou
NeurIPS2023
Furthermore, considering the short review periods typical of international conferences, the extensive proofs, despite the limited model, demand high effort from reviewers.
This adds significantly to the difficulty of assessing this paper and makes it particularly laborious for reviewers.
Additionally, the proofs provided in Appendix E.1-3 of the study frequently use the approximation symbol "\approx," which introduces ambiguity into the explanations and raises concerns about the solidity and rigor of these proofs. This frequent reliance on approximations makes it challenging to determine whether the proofs are robust and reliable. This is a significant drawback, as it undermines confidence in the theoretical claims and casts doubt on the precision and applicability of the model's theoretical underpinnings.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1: Could you clarify the differences between this paper and prior work in the field? Specifically, the paper seems to focus on bi-gram models; how challenging would it be to extend this analysis to n-gram models? What are the main hurdles in scaling the approach from bi-grams to n-grams?
Birth of a Transformer: A Memory Viewpoint
Alberto Bietti, Vivien Cabannes, Diane Bouchacourt, Herve Jegou, Leon Bottou
NeurIPS2023
Q2: What are the advantages of using the Modified Chi-squared Mutual Information (MI) for your analysis? Are there specific properties or new insights that could only be demonstrated using this metric? What unique findings does this metric enable?
Q3: In Appendix E.1-3, you frequently use the approximation symbol "\approx." Is it possible to replace these approximations with inequalities or formal big-O notation to provide a more precise and formal explanation? How would this impact the rigor and clarity of the proofs presented?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: A limitation of this study is that the model and data settings analyzed do not fully align with real-world transformers or language models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. Please see our response below.
**On $n$-gram data, simplified model & split training phases**:
We remark that the $n$-gram Markov chain data assumption is much more expressive and challenging than the bi-gram data assumption in previous works (Bietti et al. (2023); Nichani et al. (2024)).
Rigorously analyzing the training dynamics for in-context learning of $n$-gram Markov chains is a significant open problem identified by Nichani et al. (2024), let alone with a more sophisticated transformer model that also includes RPE, FFN, layer normalization, and residual links.
In this work, we address these challenges and reveal that the model learns a generalized induction-head (GIH) mechanism for $n$-gram data. We split the training phases to focus on the main hurdles. Experiments on simultaneous training of all parameters match our theoretical findings (see Figure 6 in Appendix C.1). Please refer to the general rebuttal for more details on model simplification and training phase splits.
**On the modified $\chi^2$-MI**:
In the generalized induction-head mechanism, the modified $\chi^2$-MI criterion arises naturally from gradient calculations, not by design. For intuition, see Eqn (3.6) in Section 3.3, which relates the modified $\chi^2$-MI to the standard $\chi^2$-MI under certain symmetry conditions:
$$
\log \tilde I_{\chi^2}(\mathcal{S}) = \log I_{\chi^2}(\mathcal{S}) - |\mathcal{S}| \log {d}. \tag{1}
$$
This criterion enables the model to balance maximizing mutual information and minimizing model complexity by selecting parents with high mutual information with the token to be predicted.
For instance, selecting a larger parent set in the GIH mechanism results in less biased prediction, while also incurring larger estimation variance.
In addition, we discuss in line 354-358 that the modified $\chi^2$-MI will become equivalent to the standard $\chi^2$-MI if the data is bi-gram, which reproduces the previous results on bi-gram data.
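As a toy illustration of this trade-off (with made-up numbers, not values from the paper), Eqn (1) can be rewritten multiplicatively as $\tilde I_{\chi^2}(\mathcal{S}) = I_{\chi^2}(\mathcal{S}) / d^{|\mathcal{S}|}$, making the complexity penalty on larger parent sets explicit:

```python
def modified_chi2_mi(chi2_mi, parent_set_size, d):
    """Modified chi^2-MI per Eqn (1): I~ = I / d**|S|,
    i.e., log I~ = log I - |S| * log d."""
    return chi2_mi / (d ** parent_set_size)

d = 4
single = modified_chi2_mi(chi2_mi=2.0, parent_set_size=1, d=d)  # 2.0 / 4  = 0.5
pair = modified_chi2_mi(chi2_mi=6.0, parent_set_size=2, d=d)    # 6.0 / 16 = 0.375

# The larger parent set carries more raw mutual information (6.0 > 2.0)
# but loses under the modified criterion due to the d**|S| penalty.
assert single > pair
```

This mirrors the bias-variance trade-off above: a larger parent set gives a less biased prediction but is penalized for model complexity.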
**On the comparison to Bietti et al. (2023)**: The major differences between our work and Bietti et al. (2023) are as follows:
- *(Data structure)* Bietti et al. (2023) only consider 2-gram Markov data, while we consider general $n$-gram data. As discussed in the general rebuttals, going from bi-gram to $n$-gram data is highly nontrivial.
- *(Model structure)* Bietti et al. (2023) do not consider an MLP layer or layer normalization and only use single-head attention for their construction of the induction head mechanism. In stark contrast, we keep all these key components and show they are necessary for learning $n$-gram data and exhibit a rich interplay. As our model is more complex, the original analysis in Bietti et al. (2023) does not apply.
- *(Theoretical understanding)* Bietti et al. (2023) construct a hypothetical model, supported by empirical evidence on a synthetic dataset, and provide an analysis for only three gradient steps. Their analysis is essentially confined to a neighborhood of the initialization and gives no guarantee on the training trajectory or the limiting model. We theoretically characterize the full training dynamics that lead to the emergence of the generalized induction-head mechanism.
**Can we "scale up" the analysis from bi-gram to $n$-gram?**
Simply scaling up the bi-gram analysis in Bietti et al. (2023) does not work for $n$-gram data, as the original mechanism handles only single-parent (bi-gram) dependencies.
Let us follow the construction in Bietti et al. (2023) and show a counterexample.
Consider the simplest $3$-gram data, where each token depends on the previous two tokens as parents:
- Let $x_l = w_E(l) + p_l$, where $w_E(l)$ is the word embedding for token $l$ and $p_l$ is the positional embedding for position $l$. Assume positional embeddings are orthogonal to each other, and $w_E(l)$ is orthogonal to all positional embeddings.
- The positional information is packed into the $W_{QK}$ matrix for the first layer, i.e., $W_{QK} = \lambda \sum_{l=2}^L \sum_{s\in \{1, 2\}} p_l p_{l-s}^\top $ with $\lambda$ being a large constant.
- After the first layer's attention, the model will output
$$
x_l' = W_{OV} x_{1:L}\sigma(x_{1:l-1}^{\top} W_{QK}^{\top} x_{l}) \approx W_{OV} (x_{l - 1} + x_{l -2}) = W_{OV} (w_E(l-1)+w_E(l-2) + p_{l-1} + p_{l-2}). \tag{2}
$$
Even if this model can identify the parent sets, it fails to capture order information, i.e., distinguishing which parent is first and which is second. Therefore, without changing the model structure, the original construction in Bietti et al. (2023) fails for $n$-gram data.
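The loss of order information in Eqn (2) can also be checked numerically. The sketch below (placeholder weights, restricted to the word-embedding part; the positional part behaves the same way) shows that the first-layer output is a linear image of the *sum* of the two parent embeddings, hence invariant to swapping them:

```python
def first_layer_output(w_ov, parent_a, parent_b):
    """Compute W_OV @ (parent_a + parent_b), mimicking the RHS of Eqn (2)."""
    summed = [a + b for a, b in zip(parent_a, parent_b)]
    return [sum(w * s for w, s in zip(row, summed)) for row in w_ov]

w_ov = [[1.0, 2.0], [3.0, 4.0]]           # placeholder W_OV
x_prev, x_prev2 = [1.0, 0.0], [0.0, 1.0]  # embeddings of the two parents

# Swapping which parent comes first leaves the output unchanged:
# the construction cannot tell the first parent from the second.
assert first_layer_output(w_ov, x_prev, x_prev2) == first_layer_output(w_ov, x_prev2, x_prev)
```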
**Major challenges for $n$-gram data**:
The counterexample shows why single-head attention without additional nonlinearity fails on $n$-gram data. Our work leverages multi-head attention, FFN, and layer normalization to learn history-dependent features. The main challenge lies in analyzing the dynamics of a more sophisticated model with cross-head interference and nonlinearity, which Bietti et al. (2023) did not address. We will include these discussions in the revision.
**On the proof and "$\approx$" symbol**:
We want to clarify that Appendix E contains a proof sketch instead of a formal proof, and we use the "$\approx$" symbol to facilitate understanding of the proof logic.
The complete and rigorous proofs are provided in Appendix F.
**References**:
- Bietti, Alberto, et al. Birth of a transformer: A memory viewpoint. 2023.
- Nichani, Eshaan, et al. How transformers learn causal structure with gradient descent. 2024.
---
Rebuttal Comment 1.1:
Title: Dear reviewer, please read and respond to authors' rebuttal.
Comment: This paper has very diverse reviews and it would benefit a lot to start a discussion to clarify any confusing points.
Thanks!
Your AC.
---
Rebuttal Comment 1.2:
Comment: Thank you for your response.
I have carefully read the rebuttal to my review as well as the other rebuttals.
As a result, I raised my score.
Please note that the presentation needs to be improved.
---
Reply to Comment 1.2.1:
Comment: We are glad that our response addressed your concern. We will incorporate your comments into the revision. | Summary: This paper theoretically explores how simplified transformers perform in-context learning (ICL) on n-gram Markov chain data. The authors analyze a two-attention-layer transformer model with relative positional embedding (RPE), multi-head softmax attention (with only a trainable scalar in the second layer), and an FFN with normalization. They prove that gradient flow with respect to a cross-entropy ICL loss converges to a limiting model that implements a generalized "induction head" mechanism. This mechanism involves copying past tokens within a window and selecting informative parent tokens using the FFN, followed by classification through the second attention layer. The theoretical findings are supported by numerical experiments.
Strengths: 1. The paper is well-written and easy to follow
2. The definitions/assumptions are clear. The theoretical framework using chi-square mutual information is solid and novel; within it, the authors can determine the most suitable parent sets that are both informative and not overly complex for handling the n-gram dependency. The conclusion of the three-stage training is also interesting. The process is theoretically provable and intuitively clear: first training the FFN to find the most suitable parent sets, then training the first attention layer (RPE) to select each parent for each head, and finally tuning the second attention scalar ($a$) to serve as an exponential-kernel classifier.
3. Use empirical experiments to support the theory's results.
Weaknesses: 1. The setting for the attention layer may be too simplified. For example, the authors don't consider residual connections, and they also don't consider token similarity (the original $W_K$, $W_Q$) but only the relative positional embeddings (RPE) in the first attention layer and a trainable scalar $a$ in the second attention layer. The training process is manually split into three stages (first attention RPE -> FFN -> second attention scalar $a$). These choices may make the problem more amenable to theory but quite different from the real-world setting.
2. Following Weakness-1, the numerical experiments are more on synthetic datasets + simplified models rather than real-world datasets (like wiki-text) + general 2-layer transformers. Thus I doubt the potential of using this framework for more general transformer architectures, although this problem is indeed quite difficult.
Technical Quality: 3
Clarity: 3
Questions for Authors: Similar to Weakness. There are some other questions:
1. Using the second attention layer as a classifier is a little weird because people always use the last linear layer to play this role, but in your framework, you delete the last linear layer. Considering the parameter of the second attention layer is quite simplified, I doubt that the second 'attention layer' is necessary (like we don't need the attention token similarity) for getting your results. Is it possible to remove the second attention scalar but use something like a linear layer to classify?
2. Is $\psi_{S^*}(i)$ in line 247 a typo? Besides, what's the definition of $i_s$ in (D.3)? And what's the dimension? (Is there also a typo there?)
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors don't analyze their limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. Please see our response below.
**On the model simplification**:
- *(Residual link)*
We want to clarify that the residual link is indeed contained in our simplified model via the use of a disentangled transformer. As can be seen in Eqn (2.5), the second layer's attention uses the original input $X$, which is copied from the first layer via a residual link.
We apologize for the confusion and will make this point more clear in the revision.
- *(Model Simplification)*
Simplifications in the model are commonly employed in previous work. For example, Bietti et al. (2023) also exclude the token similarity in the first layer of their simplified model, and Nichani et al. (2024) assume symmetry conditions on the data which make the second layer's QK matrix take the form $W_{QK} = a I + b \mathbf{1} \mathbf{1}^\top$, where the all-ones part $b \mathbf{1} \mathbf{1}^\top$ assigns the same score to all tokens, so the model is essentially the same as one using only a single scalar $a$.
Please see also our response in the general rebuttal on the data/model simplification and the split of training phases.
We remark that our model still contains the core components of the original transformer, including multi-head attention with relative positional embeddings, FFN, layer normalization, and also the residual link, which distinguishes our setting from the previous ones.
Even with the split of training stages, the analysis is highly non-trivial due to the cross-head interference and the nonlinearity in the model.
**On the experiments**: Our work is theory-oriented, thus we mainly conduct experiments on the synthetic data and the two-layer transformer model to validate our theoretical findings.
Moreover, as we have pointed out in the general rebuttal, the $n$-gram Markov structure is already more expressive than the bi-gram data studied in previous works (Bietti et al. (2023)).
**Questions on attention classifier**: If our understanding is correct, the reviewer is asking if we can replace the second attention layer by a linear layer for classification.
The answer is negative.
The second attention layer plays the role of a classifier that first _compares the query's feature (which contains its parents' information) with previous tokens' features_, and then _aggregates the tokens with the same parents_.
A simple linear layer cannot do this task, as it requires the ability to process sequential information. See Olsson et al. (2022) for more details on the induction-head mechanism.
Please let us know if our understanding is correct.
**On the typo**: Thank you for pointing out the typo. Yes, $\psi_{\mathcal{S}^\star}(i)$ in line 247 should be $\psi_{\mathcal{S}^\star}(l)$ instead. We will correct this in the revision.
**On the definition of the feature $\psi_{\mathcal{S}}(l)$**:
Thank you for pointing out the display issue in Eqn (D.3).
In the definition of $\psi_{\mathcal{S}}(l)$ in Eqn (D.3), which we copy in the following line,
$$
\psi_{\mathcal{S}}(l) = \Big(\prod_{s\in\mathcal{S}}(x_{l-s})_{i_s}: \{i_s\}_{s\in \mathcal{S}} \subseteq [d]\Big), \quad \forall l\in[L+1],
$$
each $i_s$ refers to an index of elements in vector $x_{l-s}$ for $s\in\mathcal{S}$.
Here, there are $|\mathcal{S}|$ parent vectors $x_{l-s}$ for $s\in\mathcal{S}$, and each index $i_s$ has $d$ choices. The feature $\psi_{\mathcal{S}}(l)$ is thus of dimension $d^{|\mathcal{S}|}$, as it is defined element-wise by iterating over all the indices $i_s$ for $s\in\mathcal{S}$.
We will correct this in the revision.
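As an illustrative sketch (hypothetical code, not from the paper), the element-wise definition of $\psi_{\mathcal{S}}(l)$ can be enumerated directly, confirming the $d^{|\mathcal{S}|}$ dimension:

```python
from itertools import product

def psi_feature(parent_vectors, d):
    """One feature entry per choice of index i_s in [d] for each parent,
    so the output has dimension d ** len(parent_vectors)."""
    feats = []
    for indices in product(range(d), repeat=len(parent_vectors)):
        entry = 1.0
        for x, i in zip(parent_vectors, indices):
            entry *= x[i]
        feats.append(entry)
    return feats

d = 3
parents = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # two one-hot parent tokens
phi = psi_feature(parents, d)
assert len(phi) == d ** len(parents)  # dimension d**|S| = 9
```

For one-hot parents, exactly one entry of `phi` equals 1, matching the intuition that the feature encodes the joint configuration of the parents.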
**References**:
- Bietti, Alberto, et al. Birth of a transformer: A memory viewpoint. 2023.
- Nichani, Eshaan, et al. How transformers learn causal structure with gradient descent. 2024.
- Olsson, Catherine, et al. In-context learning and induction heads. 2022.
---
Rebuttal Comment 1.1:
Title: Dear reviewer, please read and respond to authors' rebuttal.
Comment: This paper has very diverse reviews and it would benefit a lot to start a discussion to clarify any confusing points.
Thanks!
Your AC.
---
Rebuttal Comment 1.2:
Comment: Thanks for the response from the authors. I think most of the content is convincing, and I will keep my positive score.
---
Reply to Comment 1.2.1:
Comment: Thank you for your appreciation of our work and for taking the time to review it. | Rebuttal 1:
Rebuttal: # General Rebuttal to all Reviewers
We thank all reviewers for their valuable feedback. Below, we summarize our contributions and address common questions.
**Summary of contributions**:
To the best of our knowledge, our work is the first to show that through gradient-based optimization, a two-layer transformer can provably learn the induction-head mechanism on data from an $n$-gram Markov chain.
Our work features a rigorous analysis of the training dynamics of a two-layer transformer, organically incorporating key components including multi-head attention, relative positional embedding (RPE), feed-forward network (FFN), layer normalization, and residual link, while previous works only studied partial combinations of these designs [Zhang et al. (2023); Huang et al. (2023); Bietti et al. (2023); Nichani et al. (2024)].
We reveal a generalized induction-head mechanism where the transformer selects a group of parents for inference based on the modified mutual information criterion. We demonstrate that this criterion naturally incurs a trade-off between model complexity and information richness.
**On the simplification**:
Given the complexity of transformer models and natural language, it is beneficial to study simplified models on synthetic data to understand the induction head mechanism while preserving the essence of real-world data and models.
In the following, we briefly discuss our assumptions on the data and the model:
- *(On $n$-gram data)* Natural language is generally considered Markovian, making the $n$-gram Markov chain a natural tool for language modeling (Coleman, 2005).
Our current study significantly improves upon previous theoretical works that are restricted to bi-gram Markov chains (Bietti et al. (2023); Nichani et al. (2024); Makkuva et al. (2024)).
- In fact, going from bi-gram to $n$-gram is nontrivial, as discussed in Nichani et al. (2024); our methodology exploits the multi-head attention mechanism, with nonlinearity provided by the FFN and layer normalization, to learn a history-dependent feature. The rich interplay between these designs poses unique challenges for tracking the training dynamics.
- *(On transformer model)* We modify the original transformer model for better understanding of the functionality of each component in the transformer.
This is a common approach in theoretical studies (Bietti et al. (2023); Nichani et al. (2024)).
Nonetheless, we have kept the core components of the transformer, including multi-head attention, RPE, FFN, layer normalization, and residual link, to ensure that our findings are relevant to the real-world transformer models.
- In fact, these modifications are based on the intuition provided by previous studies. For instance, the removal of the word similarity in the first attention's QK projection weights is inspired by the fact that the first layer's QK matrix has almost zero values for the token embedding parts during training (Nichani et al, (2024)) and does not contribute to the induction head mechanism.
A rigorous study of the dynamics without such simplifications (e.g., keeping the token similarities) is left for future work.
**On split training stages**: For a clearer theoretical understanding of the training dynamics, we split the training process into three phases. On one hand, this has a similar effect to using different learning rates for different layers, which is a standard practice in the deep learning literature (Tian et al. (2023); Nichani et al. (2024); Lee et al. (2024)). On the other hand, the three-phase training paradigm can also be justified by the following facts:
- *(Theory perspective)* As outlined between lines 339-340, our results indicate a clear separation in the growth rates of different parameters in the model, which naturally gives rise to multi-phase training dynamics where some weights, like the FFN, get trained first.
- *(Experiment perspective)* Experiments on simultaneously training all layers produce the same limiting model, and the dynamics go through a similar course, which also validates our theoretical findings. Please see Figure 6 in Appendix C.1 for more details.
Moreover, even with the split of training phases, our dynamics analysis is still highly non-trivial given a more sophisticated model than those studied in previous work.
In particular, we need to handle the cross-head interference and the nonlinearity in the FFN and layer normalization, which are not present in previous works (Zhang et al. (2023); Huang et al. (2023); Bietti et al. (2023); Nichani et al. (2024)).
The split of training phases helps us focus on the major hurdles in the dynamics analysis.
We will add more details, including the strengths and limitations in the revision when additional space is available.
**References**
- Zhang, Ruiqi, et al. Trained transformers learn linear models in-context. 2023.
- Huang, Yu, et al. In-context convergence of transformers. 2023.
- Coleman, John S. Introducing speech and language processing. 2005.
- Bietti, Alberto, et al. Birth of a transformer: A memory viewpoint. 2023.
- Nichani, Eshaan, et al. How transformers learn causal structure with gradient descent. 2024.
- Makkuva, Ashok Vardhan, et al. Attention with Markov: A framework for principled analysis of transformers via Markov chains. 2024.
- Tian, Yuandong, et al. Scan and snap: Understanding training dynamics and token composition in 1-layer transformer. 2023
- Lee, Jason D., et al. Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit. 2024.
Pdf: /pdf/8ba2e360b94d4f689e5e03f923f59f78cab8714a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Offline RL via Feature-Occupancy Gradient Ascent | Reject | Summary: This paper studies the offline policy optimization problem, i.e., finding a policy whose value function is close to the optimal value function using offline samples. Under the linear MDP assumption, the authors propose a gradient ascent algorithm.
The sample complexity of the algorithm only depends on the feature coverage of the best policy and does not require coverage over any other policies.
Strengths: This paper is well written. The algorithm, theorems and lemmas presented in this paper are all very clear.
The problem studied in this paper is offline policy optimization, which has significant value to the community, both empirically and theoretically.
The algorithm in this paper is simple and computationally tractable.
In contrast to other offline RL papers, this paper does not require the offline data to be sampled i.i.d. or to be admissible; it can handle arbitrary offline data as long as the data has sufficient coverage of the best policy.
Weaknesses: Compared to Zanette (2021), the algorithm idea is somehow similar. Specifically, both algorithms use the actor-critic update, and the optimization in the algorithm of this paper is similar to the pessimism estimation in Zanette (2021).
The assumption made in this paper is the linear MDP assumption, which is stronger than the assumption in Zanette (2021).
Technical Quality: 4
Clarity: 3
Questions for Authors: Can your results extend to the case under linear Q-function and linear Bellman completeness assumptions?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes. The authors addressed all the limitations listed in the guidelines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review of our work. We particularly
appreciate your relation of our method to the actor-critic style framework.
Please see our response to your remarks and questions below.
**Weaknesses**
**Q1.** Compared to Zanette (2021), the algorithm idea is somehow similar.
Specifically, both algorithms use the actor-critic update, and the optimization
in the algorithm of this paper is similar to the pessimism estimation in
Zanette (2021).
**A.** Indeed, our method is somewhat similar in spirit to the actor-critic style method of
Zanette (2021) with the critic handling $\mathbf{\theta}$ and the actor performing a sequence of softmax policy updates.
However, there are some notable differences in how the critic updates are implemented in the two methods.
The most remarkable difference is that our critic updates do not involve any explicit
form of pessimism, which makes our algorithm significantly simpler. To see this,
note that the critic update of Zanette (2021) in Eq. (10) requires solving
a quadratically constrained convex optimization problem, which is arguably more
computationally intense than our critic update (which only requires minimizing a linear
function on a norm ball). Furthermore, their algorithm is specifically developed
for the finite-horizon setting and extending it to the infinite-horizon setting would
run into some well-known computational feasibility issues (as pointed out by Wei et al., 2021).
Overall, our method is quite different from regular actor-critic methods in that it places feature occupancies $\lambda$ in the spotlight -- as opposed to treating the value parameters $\theta$ and the policy updates as the main characters as done by actor-critic methods.
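To make the computational comparison concrete, minimizing a linear function over a Euclidean norm ball has a closed-form solution, which is what makes such a critic update cheap. A hypothetical sketch (not the paper's actual code):

```python
import math

def min_linear_on_ball(c, radius):
    """argmin of <c, x> over {x : ||x||_2 <= radius} is -radius * c / ||c||."""
    norm = math.sqrt(sum(ci * ci for ci in c))
    return [-radius * ci / norm for ci in c]

x_star = min_linear_on_ball([3.0, 4.0], radius=2.0)  # -> [-1.2, -1.6]
```

By contrast, a quadratically constrained convex program like Eq. (10) of Zanette (2021) generally requires an iterative solver.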
**Q2.** The assumption made in this paper is the linear MDP assumption, which is
stronger than the assumption in Zanette (2021).
**A** Indeed, the assumptions we make in this paper are stronger than the
$Q^{\pi}$-realizability and Bellman completeness assumption in Zanette (2021).
However, we consider the more complex infinite-horizon setting for which there
are no known sample and computationally efficient (not oracle-efficient)
methods that work under $Q^{\pi}$-realizability and Bellman completeness.
Extending our results to continue working under less restrictive assumptions
on the function approximation is an exciting open question that we had to leave
open for now.
**Questions**
**Q1.** Can your results extend to the case under linear Q-function and linear
Bellman completeness assumptions?
**A** We doubt that such extension is possible with the current setup. Our
current
results heavily exploit the linear MDP assumption to first reduce the number of
decision variables of the standard primal LP in Eq. 1 and introduce $\mathbf{\lambda}$,
then design the reduced approximate objective $\hat{f}(\mathbf{\lambda},\pi,\mathbf{\theta})$.
Extending our results to the case of $Q^{\pi}$-realizability and Bellman
completeness would require first constructing a separate yet similarly reduced
objective under these weaker assumptions. While we are unsure of how to
construct such an objective at the moment, we are
optimistic that the ideas and optimization techniques developed in the paper
can be applied to handle offline RL under $Q^{\pi}$-realizability and Linear
Bellman completeness \emph{for all policies}. As we highlight in Section 5 of
the paper, this is indeed an interesting future direction of our work.
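The occupancy-measure object at the heart of such LP formulations can be sketched on a toy MDP (all numbers are placeholders; this illustrates the general concept, not the paper's algorithm). The discounted occupancy $\mu^\pi(s,a) = (1-\gamma)\sum_t \gamma^t \Pr(s_t=s, a_t=a)$ is a probability distribution over state-action pairs, and a feature occupancy would aggregate it through a feature map:

```python
gamma = 0.9
P = {  # P[(s, a)] = next-state distribution over states {0, 1}
    (0, 0): [0.8, 0.2], (0, 1): [0.1, 0.9],
    (1, 0): [0.5, 0.5], (1, 1): [0.3, 0.7],
}
pi = {0: [0.5, 0.5], 1: [1.0, 0.0]}  # action probabilities per state
nu0 = [1.0, 0.0]                     # initial state distribution

mu = {sa: 0.0 for sa in P}           # discounted state-action occupancy
state_dist = nu0[:]
weight = 1.0 - gamma
for _ in range(500):                 # truncate the infinite discounted sum
    for (s, a) in P:
        mu[(s, a)] += weight * state_dist[s] * pi[s][a]
    nxt = [0.0, 0.0]
    for (s, a), probs in P.items():
        for s2, p in enumerate(probs):
            nxt[s2] += state_dist[s] * pi[s][a] * p
    state_dist = nxt
    weight *= gamma

# mu is a valid (normalized) occupancy measure.
assert abs(sum(mu.values()) - 1.0) < 1e-6
```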
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response. I do not have further questions. | Summary: This paper proposes a new algorithm for offline reinforcement learning in linear infinite-horizon discounted MDPs, which achieves a strong sample complexity under the weakest data coverage assumption. Moreover, the algorithm is easy to implement and computationally efficient. The algorithm design is based on a reduced version of the linear programming formulation of MDPs, which approximately transforms the original problem into an unconstrained saddle-point optimization problem.
Strengths: 1. The algorithm proposed in this paper has many nice properties: it is computationally efficient, easy to implement, and works under the weakest data coverage assumption.
2. The developed techniques and the observations on transforming the optimization problem is insightful.
3. The paper is also well-written, with clear proof sketch and is well-positioned among related work.
Weaknesses: The paper is pretty notation-heavy and a bit hard to follow. I would suggest including a table of notations with descriptions. The choice of notation could also be improved. For example, I found the mixed use of $D_{\pi}$ and $D_{\theta}$ confusing, as they look like they depend on the values of $\pi$ and $\theta$.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there a way to output a single policy (not a uniformly sampled policy) that achieves the same sample complexity?
2. I am wondering whether your algorithm can be extended to the stochastic shortest path setting [1, 2], which include finite-horizon MDP and discounted infinite-horizon MDP as special cases.
[1] Yin, M., Chen, W., Wang, M., and Wang, Y.-X. Offline stochastic shortest path: Learning, evaluation and towards optimality
[2] Chen, L., Jain, R., and Luo, H. Improved no-regret algorithms for stochastic shortest path with linear mdp
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Nothing necessary stands out.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review of our work. We appreciate your feedback on our notation and will consider your suggestion in the next draft of our paper. For now, please see our response to your questions below:
**Q1.** Is there a way to output a single policy (not a uniformly sampled policy)
that achieves the same sample complexity?
**A.** This is a great question, and the answer is currently unclear. The nature of the output policy allows us to
relate the duality gap (introduced in Section 4) directly to the policy
suboptimality, and importantly the average regret of all the players --
which from existing literature on convex optimization can be easily
controlled. Note that the first part would still be
straightforward if we were to output the last policy iterate for example.
However, it is currently unclear how to prove last-iterate convergence of the instances of mirror descent and composite-objective gradient ascent which we use for the $\pi$-player and the $\mathbf{\lambda}$-player respectively (the challenge being that the sequence of $\theta$ variables is adversarial from the perspective of these two players).
**Q2.** I am wondering whether your algorithm can be extended to the stochastic
shortest path setting [1, 2], which include finite-horizon MDP and discounted
infinite-horizon MDP as special cases.
**A.** Thank you for bringing up this great question! Since our approach is
based on the LP formulation of optimal control in MDPs, we believe that it should be
extensible without major hassle to all other problem variants for which optimal solutions
can be formulated as LPs. These include finite-horizon MDPs, infinite-horizon undiscounted MDPs (under the assumption
that all policies mix to their stationary distributions), and, yes, stochastic shortest-path problems. The analysis of
the resulting method should be more or less straightforward if one assumes that all policies are proper (in the sense
that they reach the terminal state after a finite number of steps in expectation). Without such an assumption, however, it
is unclear if our approach could work --- this is definitely an interesting question for future work. | Summary: This paper provides an approach for solving Offline RL problems in domains where the underlying MDP problem has reward and transition models that are linearly realisable under a known feature map. The paper is well presented.
Strengths: 1. Some of the tricks employed in the approach are quite interesting.
2. The analytical results are good.
Weaknesses: 1. This work is only for MDPs where rewards and transitions are linear. Many moderately interesting problems are not linearly realisable, so it is quite important that the authors provide a detailed discussion on how this can be addressed.
2. I am not entirely confident about this, so will wait for inputs from the authors. The approach seems to be built on works by Hong and Tewari, and the stabilisation trick [Neu and Okolo, Jacobsen et al.]. I was not sure about the key significant contributions of this paper on top of those works.
3. For me the biggest concern is that there are no experimental results. How would such an approach work for an MDP where the transition and reward models are not linear? Also, the approach is still approximate (given the bound), so would have been important to show the real results.
Technical Quality: 3
Clarity: 3
Questions for Authors: Apart from the questions mentioned above in weaknesses, can the authors also provide answers to the following questions:
1. Theorem 3.1 is mentioned as the most important result. However, I am not clear on why it is a significant result. Why is this bound good?
2. Also the feature coverage ratio provided by this algorithm, why is it significant? And do others also achieve this ratio for Linear MDPs only or in the more general case?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No limitations are mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your critical review of our work. We understand your
concerns regarding the linear MDP assumption and significance of our
contribution. Due to space constraints, we directly respond to your individual questions in the "Weaknesses" and "Questions" sections according to the provided numbering. Please see below:
**Weaknesses**
**Q1.** Indeed, in the paper we focus on the linear MDP setting.
While we agree that the linear MDP assumption is indeed restrictive
and may not be applicable in most real world problems, we have to
highlight that it is one of the most widely-studied settings
for developing efficient RL algorithms for large state
spaces with linear function approximation, employed in hundreds of RL
theory papers (many cited in our bibliography).
Within this very well-studied setting, our results are arguably the
strongest known ones, simultaneously demonstrating the best sample complexity
and computational complexity, in the challenging infinite-horizon
discounted-reward setting. We believe that this contribution (achieving the
best known result in a thoroughly studied domain) is arguably strong
enough to warrant the interest of the RL theory community. Looking forward,
we are confident that the theoretical insights achieved in this work
will be useful for developing RL methods
that work under weaker assumptions such as $Q^{\pi}$-realizability and Bellman
completeness.
**Q2.** It is true that our result builds on the work of Hong and Tewari (2024)
and makes use of the stabilization technique of Neu and Okolo (2024) and
Jacobsen and Cutkosky (2023). We do wish to point out that our analysis has made several
other improvements to the techniques of Hong and Tewari (2024), including:
1. The removal of the cumbersome "nearest-point oracle" required in their
original analysis. (Note that the latest version of their paper, online since June 2,
has apparently also managed to remove this limitation.)
2. Working with a much simpler parametrization of the $\lambda$ variables.
This allowed us to conduct a more straightforward analysis and remove some of the
strong assumptions made in their original work (e.g., requiring very strong coverage
assumptions - see for example the derivations in their Section D.3).
3. This refined analysis, together with stabilization, allowed us to also remove the
need for prior knowledge of the coverage parameters (which are typically unavailable
in practice).
Overall, our result can be seen as the final product of a line of work
initiated by Gabbianelli et al. (2024), refined by Hong and Tewari (2024), and
finally perfected in the present submission.
Arguably,
these previous works were all suboptimal in one way or another, whereas our result
finally demonstrates the power of this approach by achieving state-of-the-art results
across all dimensions of sample complexity, computational complexity, and
weakness of coverage conditions. Besides these quantitative improvements, we believe
that our analysis is much simpler and more transparent than the analyses in all of
these previous works, and as such future researchers will have an easier time building
further developments on top of it. We believe that these are strong enough contributions
to warrant publication.
**Q3.** As is very common in the related literature on RL theory (including just
about all of the papers we cite), our paper focuses on theoretical performance
guarantees. Our theoretical guarantees are indeed limited to linear MDPs, but
nevertheless our algorithm remains well-defined for MDPs that are not linear,
and we find it plausible that future work will be able to show guarantees for it
under weaker assumptions. We nevertheless feel that the results in this paper are
interesting enough as they are, and such challenging extensions have to be left
for future work.
Regarding your last statement, we are unsure what is meant by the phrase
"the approach is still approximate (given the bound)". Can you please clarify?
**Questions**
**Q1.** The bound in Theorem 3.1 is good because it scales appropriately with the
coverage parameter $\|\mathbf{\lambda}^{*}\|_{\Lambda_{n}^{-1}}$, which is small under the
weakest known coverage requirement -- that $\Lambda_{n}$ sufficiently covers a
single direction in the feature space. We refer to Appendix E of Gabbianelli
et al. (2024) for a detailed comparison of coverage conditions and an explanation
as to why all other coverage conditions are more restrictive than the one we
consider. All previous results under such a weak data-coverage assumption
were either achieved by impractical algorithms limited to finite-horizon problems (e.g., Zanette et al., 2021) or
suffered a suboptimal dependence on the precision level $\varepsilon$ (e.g.,
Gabbianelli et al., 2024). This makes our result the strongest known one in this
setting, which is arguably significant given how well-studied our problem is.
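For readers less familiar with this notation, here is a hedged sketch of the standard objects behind such bounds (our notation, not necessarily the paper's exact definitions or regularization; $\varphi$ denotes the feature map over state-action pairs $(x_i, a_i)$ and $\rho > 0$ a regularization parameter):

$$
\Lambda_{n} = \sum_{i=1}^{n} \varphi(x_i, a_i)\,\varphi(x_i, a_i)^{\top} + \rho I,
\qquad
\|\boldsymbol{\lambda}^{*}\|_{\Lambda_{n}^{-1}} = \sqrt{(\boldsymbol{\lambda}^{*})^{\top}\, \Lambda_{n}^{-1}\, \boldsymbol{\lambda}^{*}}.
$$

Under these conventions, the weighted norm is small exactly when the empirical covariance $\Lambda_{n}$ places substantial mass along the single direction of $\boldsymbol{\lambda}^{*}$, which is why coverage of one direction in feature space suffices.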
**Q2.** Regarding the first part of your question, see our response above.
Regarding the second question: Appendix E of Gabbianelli et al. (2024) also provides
some answers to this. We believe that this notion of coverage is only applicable
in linear MDPs: for such MDPs, the feature occupancy associated with a state-action
distribution fully determines the next-state distribution. This does not necessarily
apply to other models of linear function approximation, although some hope may be
given by the recent results of Weisz et al. (2023) who show that $q^\pi$-realizable
MDPs can in fact be modeled as linear MDPs. It remains to be seen if our results remain
applicable in such more general settings. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for the detailed feedback on our work, particularly regarding the linear MDP assumption, possible application to the stochastic shortest path problem, and comparison with Zanette et al (2021). Please see our response to your individual comments and questions below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets | Accept (poster) | Summary: The authors introduce an active learning framework for cell sorting in two photon imaging analysis. They develop a software interface to it and conduct a large scale benchmark using multiple datasets and involving multiple domain experts. They show that their method can reduce the needed manual human input to a small fraction of what other algorithms require. In contrast to many other works, they use hand-engineered features and simple classification algorithms.
Strengths: * The idea to use active learning for cell sorting in large scale two photon experiments is interesting.
* The interpolation between confidence-based and discrimination-based sample selection is a small but clever addition to the literature.
Weaknesses: * The algorithm is part of a large framework, for which many components have been described in other papers (e.g. the cell extraction framework EXTRACT). This, and the reference to some of the data papers, make the paper poorly anonymized.
* There is extensive use of supplementary figures and the appendix, making the paper quite hard to read.
* The figures are super dense and very hard to comprehend in detail. The line width in several figures is very thin, lines are overlapping, and not all figures are properly annotated.
* The paper spends too much space on the software and the overall framework, but has many very dense figures that are quite hard to take apart.
* It remains somewhat unclear which parts of the software and the overall framework were constructed for this paper and which were already present before.
* A comparison to a deep learning based cell classifier seems missing.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Line 113f: The authors discuss features using the number of spikes. How are the spikes counted from the imaging data? This seems surprising.
* Fig 2: Could the authors clarify what is shown here? This is d-prime between which distributions? How were the positive and negative examples defined here?
* Fig 2: What is meant by traditional features?
* Appendix B.1.2: Could the authors give mathematically precise feature definitions where possible?
* Line 140: The authors may want to define their acronym DCAL.
* Fig. 3: In A, what are the different colors?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our manuscript. Your feedback was extremely helpful for increasing the clarity of our manuscript, and most importantly in distinguishing ourselves from published material.
**Summary** Respectfully, we do not agree with your summary of our work in several key domains: First, we are not designing software for two-photon imaging analysis; our dataset consists mainly of 1p movies (4 out of 5 movies). Second, we are not aware of any published work using deep classifiers to sort cells in 1p movies. Next, there is no work prior to ours designing an active learning routine for cell sorting, so we are not sure which "many other works" are being referred to here. Finally, we believe we have several other contributions left out of your summary; please kindly see our general response above.
**Anonymity:** ActSort is a standalone quality control pipeline, not part of any previous publication. It is compatible not only with EXTRACT, but also with all other cell extraction algorithms. Moreover, ActSort is not affiliated with EXTRACT and does not endorse it. We simply used EXTRACT, since other state-of-the-art alternatives for processing 1p movies (CAIMAN, or ICA as used in earlier studies) were less efficient and more time-consuming. To mitigate this confusion, we added a new experiment using ActSort on ICA-extracted cells, as shown in Fig. N7 in the PDF. We are happy to revise our description of contributions accordingly if the reviewer can let us know which specific previously published paper you are referring to.
We were carefully following the guidelines to maintain anonymity. For instance, we had not shared the user manual or links to our lecture videos for this very reason. In the references, we did our best to cover a broad community, including multiple calcium imaging techniques and cell extraction algorithms from multiple groups in the field, using published datasets with permissions from authors. Fortunately, your assumption is simply not correct.
**Comments on figures and use of appendices** Respectfully, we disagree with the reviewer that the extensive additional benchmarking and the detailed explanation of the algorithm in the appendices and the figures are a weakness. This is, in our opinion, a strength of our work. Moreover, as evident from other reviewer reports, our work is self-contained in the main text, and only uses appendices to refer to methodological details and additional benchmarking studies to strengthen our claims. If you can let us know which figures you find confusing, we would love to work on them.
**Regarding discussion of software and framework** The software, the benchmark, and the active learning framework together form our novel conceptual contributions in addition to the specific active learning query algorithms. All three are given equal space in the main text. We are the first to introduce this framework to experimental neuroscience, so it is reasonable that we spend time explaining them carefully, as our readership will be divided between active learning researchers and experimental neuroscientists.
**Deep classifiers** Amazing point! We added a new experiment to address this (Fig. N8). Please refer to the “New experiments” section in the general response. In short, we tried ResNet-50 for feature extraction directly from movie frames and cell extraction outputs. Not only were the extracted features inferior to the engineered features, but the time required to run cell data through the network also exceeded our design constraints.
**Line 113f: spikes** Good question! We use the peakseek function in MATLAB to obtain the spike count from the cell traces extracted by the cell extraction algorithm (ICA or EXTRACT).
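For readers curious what such event counting looks like in code, here is an illustrative pure-Python stand-in for this kind of peak detection (the function name and the simple local-maximum criterion are our own simplifications; MATLAB's peakseek and the actual feature extraction may apply additional criteria such as minimum peak separation):

```python
def count_events(trace, min_amp=0.0):
    """Count local maxima in a fluorescence trace that exceed min_amp.

    Illustrative simplification of peak-based event counting; the
    actual pipeline may use different peak criteria (e.g., minimum
    peak separation or prominence).
    """
    count = 0
    for i in range(1, len(trace) - 1):
        if trace[i] > trace[i - 1] and trace[i] >= trace[i + 1] and trace[i] > min_amp:
            count += 1
    return count


# Toy trace with two clear transients above the 0.5 amplitude floor.
trace = [0.0, 0.2, 1.0, 0.3, 0.1, 0.8, 0.2, 0.0]
print(count_events(trace, min_amp=0.5))  # prints 2
```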
**Clarification on Fig. 2.** Great point! We apologize for the confusion here. The d-prime is between the distributions $p(X \mid c = \text{cell})$ and $p(X \mid c = \text{not cell})$, where $X$ represents the target feature. The positive examples represent the candidate being an actual cell, and vice versa. To clarify, we added an illustration to explain the definition of the absolute discriminability index, a metric for quantifying the distance between these two probability distributions (Fig. N1). Additionally, as requested, we also added a statistical quantification of the effect size of the traditional features and the new features (Fig. N2).
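To make the metric concrete, here is a minimal sketch of one standard d-prime definition between two feature samples (the paper's exact variant of the absolute discriminability index may differ, e.g., in how the variances are pooled):

```python
import math
import statistics


def abs_d_prime(cell_vals, noncell_vals):
    """Absolute discriminability index between two feature samples.

    Uses the common definition |mu1 - mu0| / sqrt((var1 + var0) / 2);
    this is an illustrative convention, not necessarily the paper's.
    """
    mu1, mu0 = statistics.fmean(cell_vals), statistics.fmean(noncell_vals)
    v1, v0 = statistics.pvariance(cell_vals), statistics.pvariance(noncell_vals)
    return abs(mu1 - mu0) / math.sqrt((v1 + v0) / 2)


# A feature whose mean value separates cells from non-cells by two units.
print(round(abs_d_prime([2.0, 3.0, 4.0], [0.0, 1.0, 2.0]), 3))  # prints 2.449
```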
**Fig 2: Traditional features:** Great question, one we should have made clear! The traditional features represent the features that CAIMAN and CLEAN used for cell classification on 1p imaging movies. Please see our general response to reviewers.
**Detailed math for appendix and typos** Thank you for pointing these out. We have updated the math for all features accordingly. We changed L140 to “In this work, we designed a new active learning query algorithm, the Discriminative Confidence-based Active Learning (DCAL) algorithm, and compared its performance with traditional random selection and the two other query strategies.” The Fig. 3A legend was placed under panels B-D. We moved the legend above to make it more accessible for the reader, as you suggested.
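As a rough illustration of how a weighted interpolation between confidence-based and discrimination-based querying could be scored, consider the toy sketch below. The uncertainty definition, the discrimination score, and the fixed weight `w` are all our own assumptions for exposition; the paper's DCAL uses an adaptive estimation process not reproduced here.

```python
def dcal_like_score(p_cell, disc, w):
    """Toy interpolation between confidence- and discrimination-based
    querying. p_cell: classifier probability of being a cell;
    disc: a discrimination score in [0, 1]; w: interpolation weight.
    This is an illustrative sketch, not the paper's exact DCAL rule.
    """
    uncertainty = 1.0 - abs(2.0 * p_cell - 1.0)  # maximal at p_cell = 0.5
    return w * uncertainty + (1.0 - w) * disc


def next_query(probs, discs, w=0.5):
    """Index of the unlabeled candidate with the highest combined score."""
    scores = [dcal_like_score(p, d, w) for p, d in zip(probs, discs)]
    return max(range(len(scores)), key=scores.__getitem__)
```

With `w = 1` this reduces to pure uncertainty sampling (the most ambiguous candidate is queried next); with `w = 0` it reduces to pure discrimination-based selection.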
**Final clarifications** As we are concluding our response, we wish to address the weaknesses 1 and 5 from your report directly. We want to make it super clear that we have NOT copied any part of ActSort from any published work, nor did any part of ActSort was published before anywhere. We apologize if there was any confusion, and hope the rebuttal, the general response, all additional experiments we shown in the PDF, and modifications, make it clear that ActSort is NOT part of a large framework previously described in other papers.
We are looking forward to your additional comments if you have any on how we can improve clarity. Now that we have addressed all your raised weaknesses and corrected the misconceptions about our contributions - for which we thank you -, we hope you will consider increasing your score.
---
Rebuttal Comment 1.1:
Comment: I apologize for misrepresenting the data modality the authors focus on (1p vs. 2p Ca2+ imaging). I also better understand that ActSort is a standalone piece of work.
- DNN: Sure, if you run a really large network on a problem that can be solved with a linear algorithm in ~75 features, you may be slower. I do think a custom-built CNN with a few layers could likely match the speed and performance. Btw, the "amazing point" in your response makes me wonder whether you are being facetious or not.
- Spikes: I would ask the authors to remove their mention of spikes from the paper in this context. Ample work performing simultaneous patch-Ca2+ imaging recordings has shown that the relationship between the traces and actual action potentials is much more complex than finding peaks. Call them "events" or "peaks".
- Is there a way to see the mathematically precise feature definitions? Like, what are "bad spikes", a term which occurs in a number of features?
---
Rebuttal 2:
Comment: Thank you for your response and careful reading of our rebuttal. We answer your questions below.
**Point 1:** First, the field has had no real success in generalization across modalities in this direction before. Please see discussions above regarding CAIMAN’s deep classifiers being suboptimal for 2p movies and not recommended for 1p movies, and Cascade requiring a decade of public benchmark accumulation to achieve this on spike extraction. Also, please note that the imaging and experimental conditions between two 1p movies may be as diverse as those between a 1p and a 2p movie.
Second, after consulting with the experimental neuroscientists, we realized that this path was also inconsistent with a major concern of theirs: Reproducibility. A deep network is a non-convex approach, and often requires retraining with new data (See Cascade here). Unless experimental groups also publish their retrained networks, the annotation would not be reproducible. This is not true for the model in our work. Here, as long as the group of annotated neurons are provided, a third party can always validate the reproducibility of the annotation.
Finally, what the input to such a deep classifier should look like is an open question, as we discussed in detail above. CAIMAN, for example, only used extracted cell profiles, but we believe (and our Fig. 2 shows) that spatiotemporal information should somehow be incorporated. We already have several other contributions, so we do not wish to open this direction as well. Instead, we did the next best thing to address it, which is considering whether a simple pre-trained network can become a solution.
Overall, building a specialized deep network is out of scope. We believe we properly discussed this now in the paper and above.
**Point 2:** Agreed. We will do so, and we also think this is more appropriate. This was an oversight.
**Point 3:** ‘Bad’ events are the ones that had low calcium amplitudes. Mathematically, we computed the 90th quantile of the trace and halved it. Any "event" with a lower amplitude than this was considered a "bad" event. We now realize that ‘bad’ is not the correct term for this; instead we should call them "weak" events. For the rest of the features, we can share more details for whichever feature you would like to know about.
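The thresholding just described can be sketched in a few lines (the nearest-rank quantile below is our own assumption; the quantile convention in the actual implementation may differ):

```python
def weak_event_threshold(trace):
    """Half of the 90th percentile of the trace, per the rule above.

    Uses a simple nearest-rank quantile; the exact quantile
    convention in the paper's implementation may differ.
    """
    s = sorted(trace)
    q90 = s[min(len(s) - 1, int(0.9 * len(s)))]
    return q90 / 2.0


def count_weak_events(event_amps, threshold):
    """Number of detected events whose amplitude falls below threshold."""
    return sum(1 for a in event_amps if a < threshold)


trace = list(range(1, 11))          # toy trace with values 1..10
thr = weak_event_threshold(trace)   # 90th-percentile value 10, halved -> 5.0
print(count_weak_events([1.0, 4.0, 6.0], thr))  # prints 2
```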
**To address your comment about our tone**
In our response, we made it clear when we disagreed with you and also when we agreed with you. Part of it was also to give you feedback about which comments we felt were quite good so that we can all come out as better writers and reviewers. We apologize that our writing was not clear. We were serious in our remarks, there was no humor intended.
We do believe that you have provided solid points for improvement, and that particular point was also necessary to at least discuss in this paper, given that our work is being considered for NeurIPS. Sure, you recommended rejection due to some assumptions about our work, but with the current reviewer loads (which we also have) and the scaling of the conference, it makes sense that certain nuances may slip through. This is why the discussion period exists. We do not believe there is any reason to be facetious about this process, and we had not intended to do so either. We apologize once again and look forward to hearing from you.
---
Rebuttal Comment 2.1:
Comment: Thanks for the additional explanations. I do see the merit in the approach and don't oppose acceptance, therefore I have raised my score to 5. I am still somewhat critical regarding some of the points discussed, also with other reviewers.
---
Reply to Comment 2.1.1:
Comment: We appreciate your time and consideration. Thank you for the helpful comments | Summary: This paper introduces a new semi-supervised active learning algorithm, ActSort, designed to accelerate cell sorting in large-scale calcium imaging datasets. The method leverages domain expert feature engineering and a novel active learning framework, optimizing the cell sorting process with minimal human input. The paper also presents a user-friendly custom software and validates its performance through a large-scale benchmark study involving six domain experts and approximately 160,000 candidate cells. Empirical results indicate that semi-automation reduces the need for human annotation to only 1%-5% of the candidate cells, while also improving sorting accuracy by mitigating annotation bias. As a robust tool validated under various experimental conditions and applicable across different animal subjects, ActSort addresses the primary bottleneck in processing large-scale calcium imaging videos, paving the way for fully automated preprocessing of neural imaging datasets in modern systems neuroscience research.
Strengths: - Developed a new active learning-accelerated cell sorting algorithm for large-scale calcium imaging datasets, significantly reducing the workload of human annotation.
- Utilized domain expert knowledge for feature engineering, enhancing classification robustness and accuracy across different animals and experimental conditions.
- Created user-friendly custom software, allowing experimental scientists without programming backgrounds to use it easily.
- Constructed a large-scale cell sorting benchmark dataset involving annotations by six experts, five mice, and approximately 160,000 candidate cells, which can be used for algorithm development and testing.
- Demonstrated the method's effectiveness through empirical studies on multiple datasets, reducing the need for human annotation to 1-5%.
Weaknesses: I believe this paper is a solid piece of scientific research, suitable for publication in a Nature sub-journal or a top-tier neuroscience journal (with the addition of more statistical analysis and biological significance studies). However, as a NeurIPS reviewer, I need to focus more on the algorithmic innovation and fair comparisons to help you improve the paper. I see the following shortcomings:
1. **Lack of Innovation in Active Learning Strategy**: While the authors propose a new query strategy, DCAL, it essentially combines existing uncertainty and diversity sampling strategies through simple weighting, without sufficient theoretical justification and discussion on the necessity and effectiveness of this combination. Additionally, there is a lack of systematic analysis and guidance on adjusting the weights, and the ablation study in the experimental section is insufficient.
2. **Lack of Experimental Comparison with Existing Semi-Automated Cell Sorting Methods**: Although the authors mention some prior semi-automated methods in the Related Work section, they do not conduct any quantitative comparative analysis in the experiments. This makes it difficult to assess the performance and efficiency advantages of ActSort over existing methods, rendering the "state-of-the-art" claim less convincing.
3. **Insufficient Feature Representation Learning**: The authors rely heavily on handcrafted features designed by domain experts, lacking data-driven automatic feature learning capabilities. Good performance on the given dataset does not guarantee generalization to new datasets and experimental paradigms (e.g., new calcium indicators, neuron types, brain regions). The authors should consider utilizing pre-trained models such as CNNs to automatically learn visual and dynamic features of cells, reducing the burden of feature engineering.
4. **Underestimation of Annotation Noise and Error**: The accuracy of manual annotations is the foundation of training and evaluation, but the paper does not sufficiently address this. Relying solely on voting to determine the "ground truth" overlooks the statistical properties of annotation errors. Additionally, the active learning process does not consider querying the same sample multiple times to reduce annotation noise, potentially overestimating the actual performance of the current method.
5. **Unreasonable Pooling Method for Human Performance Comparison**: The paper compares the classifier's output with the annotations of a single annotator on the entire dataset in each iteration, while the classifier only uses a small subset of samples. A more reasonable approach would be to compare the classifier and human performance on the currently annotated subset, which might reveal a greater advantage for humans.
6. **Insufficient Reporting of Experimental Setup Details**: The paper lacks descriptions of many critical implementation details, such as hyperparameter selection, training loss convergence, network architecture design, etc. In particular, the technical details of how general features pre-trained on ImageNet are transferred to the cell classification task are not reported, affecting the reproducibility of the results.
7. **Insufficient Scalability and Robustness Experiments**: Although the authors emphasize ActSort's ability to handle large-scale datasets, the "large-scale" in the experiments is only 150GB, which is still far from the current terabyte-scale neural datasets. Moreover, there is no sensitivity analysis regarding different imaging parameters such as resolution, signal-to-noise ratio, and frame rate.
8. **Lack of Theoretical Analysis on Active Learning Batch Size and Convergence**: The authors simply set the batch size to 1 without discussing its relationship with convergence speed and generalization performance. In particular, the setting of the hyperparameter $m$ lacks theoretical basis and sensitivity analysis. The convergence of active learning sampling is challenging both theoretically and practically, but the paper lacks the necessary analysis and discussion on this.
9. **Unknown Applicability to Other Types of Neural Activity Data**: The authors only tested on calcium imaging data, but neural electrophysiological data, such as in vitro patch clamp and in vivo multi-channel recordings, differ significantly from calcium imaging in terms of morphology and spatiotemporal resolution. Can ActSort be applied to these data types? Would feature redesign be necessary? These questions need validation.
10. **Lack of Analysis on Annotator Variability**: The differences in knowledge background and experience among annotators can introduce annotation biases, affecting the performance of the trained classifier. The authors use annotations from multiple experts in the experiments but do not analyze the variability among experts and its impact. This variability itself is an important research issue.
Technical Quality: 2
Clarity: 3
Questions for Authors: Regarding the algorithmic issues mentioned above, I have some technical questions for your reference:
1. Can you provide a detailed statistical analysis of these 76 handcrafted features? How is the discriminative power of these features objectively evaluated?
2. What is the label distribution of the 160,000 candidate cells annotated by domain experts? How do you address the class imbalance problem?
3. How were the hyperparameters for the comparative methods (Random, CAL, DAL, etc.) chosen? Were they selected in a way that might favor ActSort?
4. In Figure 4D, why does ActSort outperform human performance with just 1% of the data? Does this imply poor quality in human annotations?
5. Besides logistic regression, what other classifiers were attempted? How did they perform?
6. What are the specific details of the pre-training and transfer learning experiments? How significant are the generalization performance differences across different datasets and annotators?
7. How was the batch size in active learning chosen? Have you considered querying the same sample multiple times to mitigate annotation noise?
8. How is the ground truth defined in the experiments? How are samples with significant disagreement among experts handled?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Overall, this paper represents a substantial amount of work and provides executable software and code, which is of significant importance to the neuroscience field. My concerns are detailed in the shortcomings and issues section. Here, I will focus more on the scope of the paper.
If I were reviewing for Nature Methods, I would choose to accept this paper. However, I am uncertain whether this paper will attract widespread interest from the NeurIPS community. Therefore, I am giving a marginal score and would like to see the discussion from other reviewers before providing my final score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed report and excellent summary! We truly appreciate your vote of confidence in our work and the excellent suggestions for improving our technical contributions. Thanks to you, our paper now includes several diverse, convincing control experiments. To save space, we refer to weaknesses with “W” and questions with “Q”, please excuse us.
**W1 and W2** Please see our general response regarding the fit to venue and comparison to prior work. To address further concern, we performed two additional control studies. First, an ablation study by removing the adaptive estimation process showed that DCAL was significantly less effective without it (no space in PDF, happy to elaborate). The second study, shown in Fig. N5, demonstrates the evolution of weights over time. Also see Fig. N3 for additional evidence supporting the necessity of DCAL.
**W3** Great idea! We added a new experiment to address this (Fig. N8). Please refer to the “New experiments” section in the general response. In short, we tried ResNet-50 for feature extraction from movie frames and cell extraction outputs, which resulted in suboptimal features.
**W4, Q7, and Q8** Excellent point! We evaluated manual annotation accuracy in Fig. S4, Tables S1, S2, and S3. We used four annotators per dataset, with majority vote as ground truth, to mitigate inconsistencies and annotation noise (See Fig. S3 for the evaluation process). Human annotators could revisit samples multiple times, while ActSort trains on the final annotations.
To address annotator reliability, we performed intraclass correlation analysis. Individual annotators were inconsistent (ICC = 0.56±0.06), but classifiers trained on these annotations mitigate human bias and were more consistent (ICC = 0.64±0.07). Majority votes across annotations were quite consistent (ICC = 0.79±0.05). This shows individual annotators are unreliable, but majority votes (In experiments, only one annotator rates!) approximate ground truths well.
**W5** We compare the classifier’s prediction with single annotators across the entire dataset, NOT just a small subset. Also, as evident from the limit of 100% annotation, the classifiers do outperform humans on trained samples.
**W6** We used logistic regression for the cell classifier to ensure real-time prediction speed; it converges to the global optimum of its convex objective. The regularization effect is analyzed in Fig. S7, and a new hyperparameter analysis on the classifier threshold is in Fig. N4. AL convergence is depicted in Figs. 4, 5, S6, S8, S9, S10, and Tables S5, S6, S7. We now include a control using ImageNet-pre-trained general features (Fig. N8), in which the engineered features demonstrated significantly higher AUC than the deep-learning features.
**W7** Good point! The process depends on the number of cells, not the movie size. We added: “...(data compression) resulting in approximately 270±90 MB per 1,000 cells (mean ± std over 5 mice) data sizes.” A TB-scale movie with 10,000 cells (for example, the movie from Ebrahimi et al., 2022) can be compressed to less than 3 GB. The five imaging datasets had different imaging conditions (1p-2p), apparatus, and frame rates (20, 30, and 50 Hz), demonstrating feature robustness, as illustrated by the fact that ActSort works on both 2p and 1p movies without modification.
**W8 and W9** These are outside our scope. We use linear classifiers with instant training, making a batch size of one (a desirable property to mimic real-world cell sorting) attainable. Other data types have not reached the recently attained 1-million-neuron mark in calcium imaging and are thus not of interest here.
**W10** This key conclusion motivates ActSort. Human annotators are inconsistent, especially with large datasets. Thus, collecting 4 annotations per dataset is a key contribution of our benchmark and evaluation. For instance, for historical benchmarks like Neurofinder (2p, 100s of neurons, 1 annotator), it was later shown in Suite2p that the experts had missed many cells.
**Q1** We added an illustration explaining the absolute discriminability index (Fig. N1), as well as a statistical quantification of the effect sizes of traditional and new features (Fig. N2). We also updated the math for all features accordingly.
**Q2** The label distribution of the 160,000 cell candidates is shown in Figs. S6, S8, and S10. We tested our algorithm on imbalanced datasets with more true positives (Figs. S6 and S10) and artificially inflated datasets with many false positives (Fig. S8). Our results are consistent, demonstrating our approach’s robustness to imbalance.
**Q3** The Random query algorithm has no hyperparameters. CAL, DAL, and DCAL use the same cell (and/or label) classifiers to predict cell probability. We included a sweep over regularization parameters (Fig. S7) and a new experiment on classifier threshold sweeping (Fig. N4).
**Q4** This indicates 1) humans are inconsistent over time, 2) humans get tired after sorting 10k cells, and 3) the active learning algorithm selects representative samples. Points 1 and 2 are supported by new intraclass correlation results, and point 3 is illustrated in Fig. 3. In short, linear classifiers primarily care about boundary samples; once those are found, classification is mostly optimal.
**Q5** Other algorithms (e.g., random forests, neural networks) would be harder to judge and would not allow instant training, which is needed for real-time human annotation. The convexity of logistic regression ensures reproducibility, unlike random forest classifiers or neural networks.
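As a rough illustration of why a convex logistic regression suits this closed-loop setting, the hypothetical sketch below (the feature matrix, labels, and annotation order are all made up for illustration, not taken from our pipeline) refits a scikit-learn classifier after each simulated annotation; at this scale each refit is near-instant, and convexity makes the fitted model reproducible across runs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 1,000 cell candidates with 40 engineered
# features each, and a simple linearly separable ground truth.
X = rng.normal(size=(1000, 40))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = LogisticRegression(max_iter=1000)  # convex objective: unique optimum
labeled = list(range(10))                # indices annotated so far

# Each new annotation triggers a full refit of the linear classifier.
for new_idx in range(10, 60):
    labeled.append(new_idx)
    clf.fit(X[labeled], y[labeled])

cell_probs = clf.predict_proba(X)[:, 1]  # probabilities shown to the annotator
```

A nonconvex model (random forest, neural network) retrained in the same loop could land in different solutions from run to run, which is what we mean by the reproducibility argument.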
**Q6** We apologize for the confusion. We should not have called this pretraining and will update the text as “prelabeling”. Details of the fine-tuning process are in Algorithm S1 and Appendix C.2.
Thank you for your detailed review. We implemented your feedback whenever applicable, which improved our manuscript tremendously! If you have additional concerns, we look forward to discussing them further. If your concerns are addressed, would you consider increasing your scores?
---
Rebuttal Comment 1.1:
Comment: Thank you for providing such a detailed response to my review. Your reply has addressed most of my questions and concerns, and I'm impressed with the thoroughness of your response.
First, I want to emphasize that I've always highly appreciated the quality and potential impact of this work. My main concern previously was about the fit within the scope of NeurIPS. After reading your response and considering the opinions of other reviewers, I believe this issue has been well addressed.
I'd like to highlight a few points:
1. Your additional ablation studies and control experiments effectively demonstrate the necessity and effectiveness of the DCAL method.
2. The attempt to use ResNet-50 for feature extraction and compare its performance is a good idea. Although it didn't outperform your method, this comparison is valuable.
3. Your detailed analysis of manual annotation accuracy, especially the ICC analysis, is very helpful in understanding the data quality.
4. You've clearly explained the human performance comparison issue and provided additional insights into classifier performance.
5. The extra details you've provided on experimental setup and hyperparameter choices are also beneficial.
Given the quality of your response, the additional work you've done, and my high regard for the research itself, I've decided to increase my score to 5. I believe these improvements not only enhance the technical depth and contribution of the paper but also further demonstrate its relevance and importance to the NeurIPS community.
Thank you again for your efforts and commitment to improving the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your kind words, time, and consideration. We appreciate your encouraging words, and please let us know if you end up having additional questions!
---
Summary: The paper proposes an active learning framework for improving the accuracy of cell sorting in calcium imaging datasets. The method rests upon three main components: (1) a preprocessing module, which uses an existing cell segmentation algorithm and reduces the size of the dataset using a set of engineered features; (2) a cell selection module, which allows the annotator to visualize the features of specific detections and label them as cells or not cells; (3) an active learning module, which trains a cell classifier (whether a detection is a cell) and a label classifier (whether a cell is labeled) and uses a discriminative confidence-based strategy to select the next cell for annotation. The authors show improved results over random and mere confidence-based strategies and demonstrate that the model surpasses human performance with a small number of annotations. A benchmark cell sorting dataset and software also accompany the paper as further contributions.
Strengths: - The problem considered is significant for the neuroscience community. Improved cell sorting methods could save significant amounts of time for experimentalists and allow them to spend their time on more specialized tasks.
- The paper is written very clearly. I appreciate the clarity of the introduction and motivation of the paper. The summary of the contributions is fair and straightforward and solves a well-defined and well-motivated problem.
- The figures are informative and professional. They include the necessary information in the main text, and wherever extra information is not crucial, it is presented in the supplementary material.
- The results are thorough and significant. The methods show clear advantages of different components of the contributions (features used and DCAL active learning strategy) across multiple scenarios.
Weaknesses: - The main weakness of the paper is the technical part, which does not offer a novel contribution. In principle, the fact that a simple model leads to the presented improvements is a strength of the paper. But given the simplicity of the technical parts, one might argue that a specialized field journal could be a better venue for such a contribution than NeurIPS.
- There are no comparisons performed against any other method and all the comparisons are against different versions of the same model. What is the state of the art for this problem and what are the competing methods? How do they compare to your proposed method?
Technical Quality: 3
Clarity: 4
Questions for Authors: - It appears that the preprocessing step selects a number of cells that are then used in the software for the human annotators to label as *cell* or *not cell*. Therefore if the preprocessing step (e.g. EXTRACT algorithm) misses some of the cells there’s no recovery. Can the authors include a discussion of this in the paper?
- In addition to this, if there are imperfections such as residual motion in the videos the preprocessing step might mark a cell as multiple cells across different frames of the video (a common issue in tracking which requires stitching *tracklets*). Is there anything in the presented framework that allows for the recovery of these mislabelings? Related to this, if there's motion in the videos, other metrics developed in the multi-object tracking community are used to assess the quality of identifying a cell and maintaining its identity throughout the video. How does your framework deal with cell identity losses? Are these issues redirected to the preprocessing step?
- This question might be a high-level yet naive question given that I haven't caught up with the latest advances in cell sorting literature. How does the method compare against, say, a fully supervised approach where you collect multiple datasets and train a large model (say a vision transformer or a UNet) for sorting the cells? Given the advances in AI and the foundation models, this seems to be a natural direction to pursue.
- How does the classifier compare with an architecture well-suited for vision that takes in a zoomed-in crop of the image (or video) of a cell and predicts if it's a cell or not? This control would be important to support the argument for engineered features and a simple classifier.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The technical novelty of this paper is limited. The simplicity of the methods is a double-edged sword improving the readability yet making the paper a less than ideal candidate for a technical conference such as NeurIPS.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We sincerely thank you for your time and effort in reviewing our manuscript and providing insightful feedback. We have addressed your concerns regarding where ActSort stands within the entire calcium imaging processing pipeline and explained our fit for the NeurIPS venue in the *General Response*. Note that we omit citations from text excerpts due to character limits.
**Prior work:** Great point! We should have done a better job of explaining that 1p movies are non-standard and lack specificity. As a result, there is no published baseline for the problem we aim to solve; we are introducing the first benchmark on these types of movies. Current methods for the curation process of removing false positives rely entirely on human labor; other pipelines use simple thresholding based on what we call traditional features (CAIMAN, ICA, EXTRACT), or classifiers defined on these features (Suite2p). Please also refer to the “The concerns about comparison to prior work” section in the general response for more detailed explanations.
**Imperfections in the data (residual motion, cell extraction errors, stitching, cell identity)** You are absolutely right about all these concerns. To recap, there are two distinct problems when processing calcium imaging movies.
The first problem is cell extraction, in which motion should be corrected, cells should be properly identified, cells suspected of duplication should be stitched or removed, and/or cell identity should be faithfully tracked across sections. The field has been working on these problems for over two decades.
ActSort addresses the second problem: it is a quality control pipeline that sits on top of the several existing pipelines designed to address the problems above. Traditionally, quality control on these outputs was done manually by experimentalists, who went over each cell with custom software. However, with thousands to millions of neurons being recorded in a single session nowadays, automated quality control has become necessary. We should have addressed this in the discussion for the broader ML community, which we now do as follows:
“The quality control process focuses on identifying the true positive samples correctly while also correctly rejecting the true negative samples that are misidentified by the cell extraction algorithm. [...] However, since ActSort is a quality control algorithm, it requires that movies are motion-corrected and cells are extracted with the experimenter's favorite cell extraction algorithm, therefore, any mistake made in the cell extraction process will propagate to the cell sorting process.” and “One limitation of ActSort is that it relies on correct cell extraction by the cell extraction algorithm used by the experimenter. Future additions to ActSort could mitigate some of the common errors, such as the ability to merge segmented or duplicated cells.”
**If the preprocessing step misses some of the cells, there’s no recovery** You make an excellent point! Although EXTRACT and ActSort both fall within the calcium imaging processing pipeline, they have different purposes: ActSort is a standalone pipeline for QUALITY control rather than cell finding. We modified the discussion section based on your suggestions as:
“ActSort is a novel standalone pipeline for quality control that can be added as a further step after using any cell extraction algorithm such as EXTRACT, Suite2p, ICA, CAIMAN, and many others (Pachitariu et al., 2016; Ren et al., 2021; Giovannucci et al., 2019).”
**Using foundation models** Wonderful question! Let us illustrate the shortcomings of foundation models with two examples from different, but related, problems:
Cell extraction: A foundation model, Cellpose, was developed for segmentation of images but is not used for cell extraction from videos. The main reason is that brain recordings, particularly 1p imaging videos, are very diverse in their backgrounds, imaging and experimental conditions, and cell shapes/types/sizes. Therefore, even for cell extraction, foundation models have not been successful to date.
Event extraction: Cascade (Rupprecht et al., 2021) utilized deep models to extract events from calcium activity traces in two-photon (but not one-photon) movies. The model was trained from scratch on a massive public dataset and did not generalize to 1p movies for the aforementioned reasons.
Similarly, for cell sorting, a new model may need to be trained as more public benchmarks (ours being the first at this scale) become available.
**Control with a deep classifier** What a great idea! To address this, we added an experiment using ResNet-50 for extracting features and classification using 1p calcium imaging snapshots. Specifically, we provided the extracted cell profiles and the average cropped movie snapshot, averaged over frames with cells’ activities, to ResNet-50 and collected 2,000 features from the final layer. We repeated our experiment in Fig. 2 for this dataset (Fig N8). The engineered features demonstrated significantly higher AUC than using deep learning.
**Technical novelty** We understand that the simplicity of the introduced method may make it seem not technical enough at first, but we kindly disagree. Please refer to the “Regarding fit to the venue and our contributions” section in the general response for further discussion. In short, we believe the impact of our technical contributions, however simple (now that we have also shown the suboptimality of a classifier trained on ResNet-50 features), is significant: neither CAL nor DAL reaches the success of DCAL, nor does DCAL without adaptive updates reach the same levels (new experiment; data not shown due to page limits, but we are happy to elaborate further).
Thank you for your helpful suggestions. Specifically, the deep net classifier was a very significant addition to our work, thanks to your feedback! If you believe we addressed your concerns, would you consider increasing your scores?
---
Rebuttal Comment 1.1:
Title: Updated review
Comment: Thank you for the detailed discussion of the points I raised. Your rebuttal was quite helpful for me to reorient myself and place this contribution in the literature on information extraction from calcium videos.
I’m glad that you found my suggestion of replacing the feature extraction with a CNN helpful, and happy that it opened up the opportunity of the new experiment. In addition, I would like to give the authors credit for their constructive engagement in the discussion with all reviewers and for taking the comments to improve their work. Well done!
I’m happy to increase my rating. The one point that I still don’t fully agree with the authors is the appropriateness of the venue. While I agree with the published literature on neural information processing, the question here is whether the technical contribution together with the impact of the work makes it a suitable candidate for publication at NeurIPS. I decided to leave the answer to this question to the AC.
---
Rebuttal 2:
Comment: Thank you very much for your kind and constructive words! Your feedback helped us make our work better!
---
Summary: In this paper, the authors develop and open-source a software package, ActSort, for cell sorting of large-scale calcium imaging datasets in neuroscience, which integrates domain-expert features with an active learning framework. Alongside the software, the authors provide a new benchmarking dataset which they use to evaluate their newly developed active learning query algorithm that forms the backbone of ActSort. The active learning algorithm seeks to reduce and most efficiently use human annotators' time by interleaving automated cell classification with queries for human labels of the most informative outlier boundary cells. The authors provide extensive benchmarking and evaluation of their approach across different real-world experimental conditions. In doing so, they demonstrate that ActSort significantly reduces the number of human-provided cell labels necessary to achieve sufficient true positive/negative rates, and thereby constitutes an important step towards alleviating this bottleneck in processing large-scale calcium imaging datasets.
Strengths: The presented work is excellent in originality, quality and significance, with sufficient clarity.
The main strength of the paper is in its core contribution, the development of DCAL, an active-learning query algorithm that combines the advantages of confidence-based and discriminative active learning. This combination leads to the query algorithm selecting outliers (DAL) near the decision boundary (CAL) to be labeled by the human annotator.
**Originality**
The paper makes several original contributions to the field of cell sorting, the most valuable of which seems to be the 1) presentation of a novel active learning query algorithm that combines confidence-based and discriminative active learning and 2) the application of this algorithm to the problem of cell sorting in the form of 3) a GUI-based open-source software.
**Quality**
The described benchmarking datasets and the algorithm, as well as the experimental designs for empirical evaluation are of high quality, in that they are large-scale, well-motivated and well-executed, respectively.
**Significance**
This work is potentially very impactful in the field of systems neuroscience, if the claimed contributions, particularly the substantial reduction in the required number of human-labeled cells, mitigation of annotator bias, user-friendly design of software, and generalization capabilities hold true in deployment.
**Clarity**
The paper is well-motivated, clear in writing and provides extensive supplementary material. However, improvements are necessary (see Comments/Questions).
Weaknesses: The main weaknesses of the paper are a somewhat shallow discussion and a lack of accessibility for the putative audience, that is, experimentalists with less experience in reading technical papers.
**Discussion**:
Instead of a discussion, authors provide a conclusion that reiterates the contributions of the paper. It would be better to instead discuss potential limitations of the presented methods; interpret some of the more surprising results (see questions); discuss how much modification would be necessary to extend the approach to multi-class data, and data collected with different types of indicators (e.g. voltage-imaging).
**Accessibility**:
What I particularly like about the paper is that all components of the proposed active-learning based query algorithms are well-motivated and interpretable. It would be valuable to make the components of the algorithm (i.e. the CAL and DAL components and the adaptive estimation of w) more accessible, by adding interpretation aids (verbal and visual) to eqs. 3 and 5.
Technical Quality: 4
Clarity: 3
Questions for Authors: **Major comments and questions**
Fig. 2: Please give more guidance in the legend for interpreting the plots (A: provide definition of absolute discriminability index); B: differences seem minimal. Did the increase in accurate rejection of false-positive candidates gained by inclusion of novel features come at the cost of increased false-negatives?
Fig. 3D: It seems surprising that feature distance does not differ substantially between the different dcal versions. Can this be explained by showing the evolution of w during the adaptive estimation process?
**Minor comments and questions**
ln. 149 following: this sentence is broken.
Fig. 3A: Provide legend indicating what the colours mean; visualize the decision boundary.
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: As mentioned above, a discussion of limitations of the presented approach (perhaps in comparison to other approaches) is lacking from the discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We appreciate your time and effort in reviewing our manuscript and providing invaluable feedback. You caught everything in the paper, your suggestions were extremely helpful, and your review led to new experiments that increased the readability and impact of our paper! To us, this was an extremely well written, helpful, and insightful review. Thank you!
**Accessibility for experimentalists** Thank you for highlighting this critical point while evaluating our work! We have prepared a user manual and lecture videos (with tutorials) for ActSort, but due to the anonymity requirements, we cannot share them yet. We are committed to providing a user-friendly introduction to ActSort, which will hopefully ease the experimental neuroscientists into the software. Currently, ActSort has a small user base of 20+ researchers globally (mainly collaborators), and we are actively collecting their feedback to further improve the user experience.
**Discussion section** The reviewer is absolutely correct. Thanks to the additional page provided after the revisions, we now added a new discussion section. We are happy to share the full text if requested, but for brevity, we will list the discussion bullet points below:
- ActSort is an active learning accelerated cell sorting pipeline for large-scale 1p and 2p calcium imaging datasets, which is compatible with any cell extraction algorithm (CAIMAN, Suite2p, EXTRACT, ICA, etc.).
- Since ActSort is a quality control algorithm, any mistake made in the cell extraction process (for example, missed cells and/or motion artifacts) will automatically propagate to the cell sorting process. Though we plan to incorporate the merging of duplicate cells in the future, the current version does not support this. Thus, we require that movies are motion-corrected and cells are properly extracted with the experimenter's favorite cell extraction algorithm.
- ActSort makes no assumptions about the type of behavioral experiment, the Ca2+ indicator, or the imaging conditions, and is robust to variations thanks to its standardized features.
- Though currently validated for calcium imaging experiments, ActSort can be minimally modified (perhaps with the addition and subtraction of new features) to be applied to newly emerging technologies such as voltage imaging and/or multi-class datasets (for example, including dendrites).
- Our optimally compressed data format has implications for sharing Ca2+ imaging datasets publicly. To date, only post-processed activity traces were shared, but with our format, movie snapshots can now be shared too, allowing end users to check the quality of the data.
If you have additional comments, please let us know!
**Clarification for query algorithms** Great suggestion! We added an explanation to equation (3) as “The uncertainty-score $c_i^{(t)}$ represents the uncertainty regarding the sample $i$’s position relative to the decision boundary. The higher the score, the closer it is to the decision boundary. The discriminative-score $d_i^{(t)}$ represents the uncertainty regarding whether the sample $i$ faithfully represents the full dataset. Higher scores indicate the sample is unique and underrepresented by the labeled dataset.”
We added an explanation to equation (5) as “$w$ approaches the predefined weight $w_0$ if the label classifier $p_\phi^{(t)}$ correctly differentiates between labeled and unlabeled data, indicating unique samples in the unlabeled dataset. $w \to 0$ if there are no underrepresented samples in the unlabeled dataset, resulting in the query algorithm selecting only the boundary cells, as the CAL algorithm does.”
To address your concerns about adaptive updates, we added a new figure (Fig. N5). In the beginning, when only a few samples are sorted, there is an initial drop in the weight values. This drop occurs because the label classifier fails to differentiate between labeled and unlabeled samples. This indicates that initially, the CAL component dominates the sample selection process. As the process continues, the DAL component starts to take over, selecting unique and underrepresented samples from the unlabeled dataset, and finally converging back to CAL again.
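To make the interplay of the two scores concrete, here is a minimal, hypothetical sketch of a DCAL-style query step (the helper name `dcal_query` and the exact score forms are our illustration; the paper's equations (3) and (5) may differ). It shows how a weight $w$ mixes a boundary-uncertainty (CAL) score with a discriminative (DAL) score:

```python
import numpy as np

def dcal_query(cell_probs, disc_probs, w):
    """Pick the index of the next candidate to annotate.

    cell_probs: cell classifier's P(cell) for each unlabeled candidate.
    disc_probs: label classifier's score that a candidate is unlike
                anything already in the labeled set.
    w: adaptive weight mixing the two criteria (0 = pure CAL).
    """
    # CAL-style uncertainty score: maximal at the decision boundary (p = 0.5)
    c = 1.0 - 2.0 * np.abs(np.asarray(cell_probs) - 0.5)
    # DAL-style discriminative score: high for underrepresented samples
    d = np.asarray(disc_probs)
    return int(np.argmax((1.0 - w) * c + w * d))

probs = [0.5, 0.9, 0.1]
disc = [0.1, 0.9, 0.2]
print(dcal_query(probs, disc, w=0.0))  # 0: pure CAL picks the boundary cell
print(dcal_query(probs, disc, w=1.0))  # 1: pure DAL picks the unique cell
```

With the adaptive schedule described above, $w$ starts small (CAL-dominated), grows as underrepresented samples appear (DAL takes over), and decays back toward CAL once the labeled set covers the feature space.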
**Questions about Fig. 2:** Thank you for pointing out the confusion regarding the absolute discriminability index. We added an illustration to explain its definition in the PDF (see Fig. N1). The absolute discriminability index is a metric used to quantify the effect size between two distributions. If the two distributions can be easily told apart, we will obtain a higher d' value. The two distributions here are feature values conditioned on whether a sample is a cell or not. Additionally, we included a statistical analysis comparing the effect sizes of traditional features and new features (Fig. N2).
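For concreteness, a common effect-size form of such a discriminability index can be sketched as follows (our illustrative definition with a hypothetical helper name; the exact formula in the paper may differ slightly):

```python
import numpy as np

def abs_discriminability(x_cell, x_not_cell):
    """Absolute discriminability index |d'| between two feature-value
    distributions: absolute mean difference divided by the RMS of the
    two standard deviations. Illustrative effect-size form only."""
    mu1, mu2 = np.mean(x_cell), np.mean(x_not_cell)
    var1, var2 = np.var(x_cell), np.var(x_not_cell)
    return np.abs(mu1 - mu2) / np.sqrt(0.5 * (var1 + var2))

# Well-separated feature values give a large d'; identical ones give 0
print(abs_discriminability([2.0, 4.0], [0.0, 2.0]))  # 2.0
```

Here the two distributions are a feature's values conditioned on whether the sample is a cell, so a larger value means the feature separates cells from non-cells more cleanly.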
The illustration in Fig. 2B makes the difference seem minimal, but the AUC changes from 0.94 to 0.97. We will update the figure to include a zoomed-in version. We added a sentence explaining that the increase in the true negative rate did not sacrifice the true positive rate: “Re-analyzing the dataset from Fig. 2B, we found that the new features increased the rejection accuracy from 67% to 75% (Fig. 2B) without decreasing the accuracy of accepting true positives (97% to 97%), leading to more effective separation between cell and not-cell samples in our benchmarks (Fig. 2A).”
**Question about Fig 3D:** Wonderful suggestion! To address this point, we changed the feature distance measurement from Euclidean distance to cosine distance, which has a much better dynamic range (Fig. N3).
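For reference, the cosine distance we switched to can be sketched as below (assuming the standard 1 − cosine-similarity definition; the helper name is ours). Unlike Euclidean distance, it is bounded, which is what gives it the better dynamic range on high-dimensional feature vectors:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors; bounded in
    [0, 2] regardless of vector magnitude."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0: identical directions
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0: orthogonal features
```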
**Typos:** Thanks for pointing out our typos! We will revise the writing! The Fig 3A legend is placed under Figure B-D. We moved the legend above to make it more accessible for the reader as you suggested.
Thanks to your feedback, we were able to improve the clarity of our work. We hope that you are satisfied with our responses and consider supporting our submission with a very strong acceptance!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed reply to points raised.
I would like to ask the authors to share their updated Discussion section.
Did the authors visualize the decision boundary in Fig. 3A?
While I appreciate additional figures/analyses N3 & N5, I am not convinced of their added benefit. N3: what is the reasoning behind changing the distance metric? I also don't really see a difference in the previous and new plots. N5: This should maybe be plotted on a log scale.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you for reading our rebuttal and your continued engagement. Please find below our responses (citations omitted and certain sentences shortened):
**Discussion Section:**
"In this work, we introduced ActSort, a user-friendly pipeline consisting of 3 modules: a preprocessing module, a cell selection module, and an active learning module.
The preprocessing module efficiently reduces the data size associated with $10,000$ neurons down to a few GBs, which includes not only the cells' Ca$^{2+}$ activity traces but also spatial profiles and movie snapshots during Ca$^{2+}$ events (allowing end users to check the quality of the data), together with the engineered features. This compact representation allows the ActSort pipeline to be run locally on laptops, despite original movie sizes of up to TBs. The joint compression of the movie snapshots and cell extraction information offers another key contribution to the neuroscience community, with implications for sharing imaging datasets publicly.
The cell selection module features a custom design with an easy-to-use interface that displays temporal, spatial, and spatiotemporal footprints, and incorporates a closed-loop online cell classifier training system. During the annotation process, the software provides real-time feedback by displaying predicted cell probabilities and the progress made by the human annotator as well as the fraction of unlabelled cells that ActSort is confident about.
The active learning module works in the background, strategically selecting candidates for human annotations, and trains cell classifiers with annotated candidates. To make our pipeline easily accessible and used by the Neuroscience community, with this work, we are providing a user manual and video tutorials.
Previous work on quality control for two-photon movies can be used, though less accurately, on one-photon movies. One-photon datasets face challenges like low resolution, high background noise, and reduced specificity. Our solution, ActSort, addresses both brain-wide one-photon and two-photon datasets through standardized features that are robust across various behavioral experiments, Ca$^{2+}$ indicators, imaging conditions, and techniques.
ActSort can be added as a quality control step after any cell extraction algorithm, such as EXTRACT, Suite2p, ICA, CAIMAN, and others. The quality control process focuses on correctly identifying true positives and rejecting true negatives misidentified by the extraction algorithm. ActSort surpasses human performance in true positive and true negative rates by annotating less than 3\% of samples. However, since ActSort is a quality control algorithm, it relies on motion-corrected movies and accurate cell extraction, as any mistakes in extraction propagate to sorting.
To support the development of active learning algorithms in systems neuroscience, we introduce the first publicly available benchmark for cell extraction quality control on both one-photon and two-photon large-scale calcium imaging, comprising five datasets: three one-photon and two two-photon Ca$^{2+}$ imaging datasets, with approximately 40,000 cells and 160,000 annotations (each dataset annotated independently by four annotators). This dataset is unparalleled in the public domain.
One limitation of ActSort is that it relies on correct cell extraction by the cell extraction algorithm used by the experimenter. Future additions to ActSort could mitigate some of the common errors, such as the ability to merge segmented or duplicated cells. Future directions also include the exploration of oversampling techniques without hurting the true positive rate, more sophisticated cell classifier architectures, and batch sampling by the query algorithm.
Furthermore, while ActSort was validated with 1p and 2p datasets, it remains an open question whether it could be applied to newly emerging technologies such as voltage imaging and/or multi-class datasets (for example, including dendrites) with the current features, or with slight modifications such as the addition or removal of features, which can be explored in future work.
Another important aspect requiring further exploration is the expertise level of the annotators. Specifically, the six annotators had different levels of expertise in working with \cm imaging datasets. Hence, a potential moderation relationship can exist depending on the expertise of the annotator, which is left as future work."
**Fig. 3A visualization:** Yes! We (approximately) visualize the region comprising the boundary cells, as well as the boundary itself.
**Fig N3:** We find cosine similarity to be more interpretable, and it has a better dynamic range. We would love to hear your thoughts, though!
**Fig. N5:** Yes, agreed. We will update the figure to be log-scale in the fraction of annotations.
Once again, thank you for your time, consideration, and support. We look forward to hearing your thoughts on our updated discussion!
---
Rebuttal Comment 2.1:
Comment: Thank you for sharing the discussion. I still would like to see more discussion of what would be necessary to extend ActSort to other data modalities (and maybe less repetition of the strengths of ActSort) and more extensive comparison to existing work (the authors did this in the rebuttal).
Overall, I support acceptance of this piece of work.
---
Rebuttal 3:
Comment: Thank you very much for your support of our work and additional feedback. Indeed, we will go over our rebuttal one more time before we finalize this work and will make ALL the changes we committed to. We think all suggestions were helpful and necessary. We will share the changes we made to address your remaining concerns, but we do not expect a response from the reviewer should they feel content with the rebuttal.
We removed the first three paragraphs from the discussion so as not to repeat the strengths of ActSort. For the questions you raised, please find the relevant full (unshortened) paragraphs below:
**Existing work**
"ActSort comes as a standalone quality control software, which can be used to probe the outputs of \emph{cell extraction pipelines} such as EXTRACT \cite{inan2021fast}, Suite2p \cite{pachitariu2016suite2p}, ICA \cite{mukamel2009automated}, CAIMAN \cite{giovannucci2019caiman}, and others \cite{zhou2018efficient,cnmf,chen2023hardware,roi}. These cell extraction algorithms take the raw \cm movie as their inputs, correct the brain motion, perform spatial and temporal transformations to standardize the \cm imaging movies, identify putative cells' spatial profiles and temporal activities, and often perform primitive quality controls to output the final set of neural activities.
Historically, additional quality controls on the cell extraction outputs would be performed with manual annotation, which was feasible for small neural recordings with hundreds of neurons \cite{marzahl2020fast, salvi2019automated,amidei2020identifying, wang2021annotation,schaekermann2019understanding,corder2019amygdalar}. Yet, with the advent of large-scale \cm imaging techniques, now recording up to one million cells \cite{manley2024simultaneous}, manual review became unrealistic. Instead, the field of experimental neuroscience direly needs automated quality control mechanisms that would correctly identify the true cell candidates while rejecting true negatives misidentified by the extraction algorithms.
As discussed above, ActSort is the first scalable and generalizable solution in this direction. Yet, previous works (as parts of existing cell extraction pipelines \cite{pachitariu2016suite2p,giovannucci2019caiman,inan2021fast}) had tackled this problem in specific instances: Suite2p designed cell classifiers based on some basic features to increase the precision of the algorithm \cite{pachitariu2016suite2p}, CAIMAN pre-trained a deep classifier for two-photon movies \cite{giovannucci2019caiman} (though not applicable to 1p \cm imaging movies \cite{caiman_demo_pipeline_cnmfE}), whereas EXTRACT performed thresholding on a set of quality metrics \cite{inan2021fast}. Notably, these existing automated methods with pre-trained cell classifiers often found success only for high quality 2p \cm imaging movies \cite{pachitariu2016suite2p,giovannucci2019caiman}, and even then underperformed human annotators \cite{giovannucci2019caiman}. One-photon \cm imaging datasets, on the other hand, are quite diverse in their imaging (miniscope vs mesoscope) and experimental (head-fixed vs freely behaving) conditions and face additional challenges due to low resolution and high background noise. With ActSort, we sought a generalizable solution that does not target a specific modality or require re-training, but uses interpretable features that are robust across various behavioral experiments, \cm indicators, imaging conditions, and techniques.
To provide a baseline for existing methods in our benchmarks, we designed a feature set called "traditional features" (Fig. \ref{fig:fig2}), including features used by classifiers in these prior works (see Appendix \ref{sec:feature_engineering}). Moreover, these methods did not use active learning; instead, they annotated only random subsets. Thus, in our experiments, these prior methods (or, to be more exact, a plausible upper bound on them) are represented by the random sampling query algorithm, which uses the full feature set to allow fair comparisons to active learning approaches."
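To make the contrast above concrete, here is a minimal toy sketch of random-subset annotation versus an uncertainty-based query. All names and numbers below are our own illustration, not ActSort's actual query algorithm or features:

```python
import random

random.seed(0)

# Hypothetical pool: each candidate's classifier-estimated probability of
# being a true cell (made-up numbers for illustration only).
pool = [random.random() for _ in range(1000)]

def random_query(pool, k):
    """Baseline representing prior methods: annotate a uniform random subset."""
    return random.sample(range(len(pool)), k)

def uncertainty_query(pool, k):
    """Active learning: annotate candidates closest to the decision boundary."""
    return sorted(range(len(pool)), key=lambda i: abs(pool[i] - 0.5))[:k]

def avg_margin(idx):
    """Mean distance from the 0.5 decision boundary for selected candidates."""
    return sum(abs(pool[i] - 0.5) for i in idx) / len(idx)

k = 20
rand_idx = random_query(pool, k)
unc_idx = uncertainty_query(pool, k)

# Uncertainty sampling concentrates annotation effort where the classifier
# is least sure, which is the intuition behind query-based cell sorting.
assert avg_margin(unc_idx) < avg_margin(rand_idx)
```

In practice the classifier is retrained online as annotations arrive; this sketch shows only a single query round.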
(Continued below)
---
Rebuttal 4:
Comment: **Future work:**
"Though we performed extensive analysis to highlight the efficiency and effectiveness of ActSort, our work is merely a first step toward what may hopefully become a fruitful collaborative subfield comprising experimental neuroscientists and active learning researchers. There are several directions that future work could improve upon, which we briefly summarize below.
Firstly, in this work, we used linear classifiers for rapid online training during cell sorting and reproducibility of annotation results. This was mainly rooted in the fact that pre-training deep networks required substantial data and standardization across various \cm imaging movies. We believe that with additional public datasets that may follow our lead, this direction can become reality (as was the case for a different, yet relevant, problem of spike extraction \cite{rupprecht2021database}). Our results in this work have set a strong baseline for such future deep-learning approaches.
One limitation of ActSort is that it comes as a quality control pipeline to existing cell extraction approaches. Therefore, any mistakes in the cell extraction step would automatically propagate to cell sorting. Yet, some of these mistakes can be mitigated or highlighted post hoc. For instance, future ActSort versions could involve options for merging segmented or duplicated cells, or identifying motion in activity traces and thereby notifying the user to improve their preprocessing steps.
Another important aspect requiring further exploration is the expertise level of the annotators. To date, each \cm imaging movie is often annotated by a single annotator, who unfortunately can tire after long hours and become inconsistent, as we have discussed above. This was the reason behind our choice to have the movies in our benchmark annotated by multiple researchers. Yet, the human annotators had different levels of expertise in working with \cm imaging datasets. Hence, a potential moderation relationship can exist depending on the expertise of the annotator. Fully exploring this relationship requires further research with \emph{more} annotators per dataset, having the same annotators sort the same cells at different times, and/or testing sorting effectiveness before and after standardized sorting training.
Finally, in this work, we validated ActSort for one-photon and two-photon \cm imaging datasets and for sorting binary classes. Yet, our framework is generally applicable to other modalities, barring additional feature engineering performed by domain experts in those fields, or with a pre-trained deep network. For instance, by replacing the logistic regression with a multinomial version and approximating DAL scores with entropy instead of decoder scores, our work readily applies to multi-class datasets that may jointly include, \textit{e.g.}, dendritic and somatic activities. With correct features, the framework we introduced here should also be helpful for the newly emerging voltage imaging technologies, especially as the number of cells will inevitably increase in such movies with the technological advances."
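To make the multi-class extension above concrete — a multinomial (softmax) classifier scored by predictive entropy instead of decoder scores — here is a minimal sketch. The candidate names, class labels, and logits are our own made-up illustration, not code or data from ActSort:

```python
import math

def softmax(logits):
    """Convert raw class scores to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy of a class distribution; higher = more ambiguous."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Hypothetical logits over three classes (e.g., soma / dendrite / not-a-cell).
candidates = {
    "cand_0": [4.0, 0.1, 0.1],  # confidently class 0
    "cand_1": [1.0, 0.9, 1.1],  # ambiguous -> should be queried first
    "cand_2": [0.2, 3.5, 0.3],  # confidently class 1
}

# Query the most ambiguous (highest-entropy) candidates first.
scores = {name: entropy(softmax(lg)) for name, lg in candidates.items()}
query_order = sorted(scores, key=scores.get, reverse=True)
print(query_order[0])  # -> cand_1
```

The binary logistic-regression case is recovered when there are only two classes, where entropy ranks candidates identically to distance from the decision boundary.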
With these additions, our work is now exactly 10 pages. We thank you very much for your time and support; and wish you a great week! | Rebuttal 1:
Rebuttal: Many thanks to all the reviewers for their comments! We appreciate your time and effort. We have addressed all concerns with written edits to the manuscript, by performing the requested experiments (see attached PDF), and/or by citing relevant literature. Here, we would like to address some of the common concerns raised and further clarify the contributions of our work.
**Comparisons to Prior Work** ActSort is a novel post-cell extraction quality control algorithm, not part of existing pipelines. Historically, manual annotation was feasible for small datasets, but with large-scale imaging techniques recording up to one million cells (Manley et al., 2024), automated quality control is essential, as manual review is unrealistic for such volumes. For instance, Manley et al. (2024) seems to have used thresholding based on quality metrics (not even classifiers!) rather than human annotation (https://github.com/vazirilab/MAxiMuM_processing_tools/blob/main/planarSegmentation.m).
Most algorithms like CAIMAN and Suite2p focus on 2p movies, whereas our work targets broader solutions, including 1p movies with lower resolution, higher background noise, and reduced specificity. Applying pre-trained classifiers from these algorithms to 1p movies is infeasible, as even for 2p movies these models are suboptimal (CAIMAN, Giovannucci et al., 2019). CAIMAN advises against using CNN classifiers for 1p data (https://github.com/flatironinstitute/CaImAn/blob/main/demos/notebooks/demo_pipeline_cnmfE.ipynb). Suite2p, validated only for two-photon movies, uses features + linear classifiers that are represented in our traditional features. Similarly, EXTRACT seems to use thresholding based on some features.
We designed a feature set called "traditional features," including features used by classifiers in these prior works, such as SNR and spike width. The CLEAN framework (https://bahanonu.github.io/ciatah/) employs predefined features to train cell classifiers, though CLEAN is unpublished and lacks public code. We included these features in our traditional set and compared them with our new features. Like the other prior methods, CLEAN did not use an active learning query algorithm, instead sampling cell candidates randomly. Our baseline therefore represents prior methods as random sampling and compares them with our novel query algorithm.
**Fit to the Venue and Contributions** Over a long period, tools for “Neural Information Processing” (Prank et al., 1998; Pachitariu et al., 2013; Andilla et al., 2014; Inan et al., 2017; Giovannucci et al., 2017; Aitchison et al, 2017; Choi et al., 2020; Dinc et al., 2023) have been welcomed by the NeurIPS conference. As NeurIPS stands at the intersection of AI and neuroscience, it has a rich tradition of embracing both technical and conceptual novelties, as well as new computational tools for neuroscientists. We believe our work upholds this tradition by offering three main contributions:
- *First Active Learning Benchmark for Cell Sorting:* We introduce the first public benchmarks for cell extraction quality control on 1p and 2p imaging, with ~160,000 annotations. This dataset is unparalleled, and we kindly ask that reviewers consider its value for the active learning community as well.
- *Framing Cell Sorting as an Active Learning Problem:* We introduce and frame cell sorting as an active learning problem, providing essential tools for further studies, including features, datasets, and software.
- *Novel Query Algorithm:* We propose a novel query algorithm which, we believe, makes a significant contribution. While some reviewers found the technical novelty limited, we respectfully disagree. We wish to bring forth the argument that technical novelty should be evaluated based on its impact rather than complexity. Our algorithm, reducing human effort from 100% to 1%, is a substantial technical advancement. We also performed more control studies to provide evidence for this point (see below).
**Summary of New Experiments** We provide a summary of new figures and experiments addressing reviewers’ concerns. Additional details are in the reviewer responses and figure captions.
- Figs. N1 and N2: Added figures explaining the absolute discriminability index and summarizing effect sizes for all features.
- Fig. N3: Replotted Fig. 3D with cosine distance for better dynamic range.
- Fig. N4: Added sensitivity analysis of classifier thresholds.
- Fig. N5: Recorded DCAL weights throughout the cell sorting process, showing adaptive estimation decreased sensitivity to initial conditions.
- Figs. N6 and N7: Added experiments on cell candidates from the ICA algorithm. **These figures showcase the necessity of cell sorting, as even discarded garbage components can encode speed (perhaps due to brain motion or other contaminations), which might lead to incorrect biological conclusions if not culled out.**
- Fig. N8: We used ResNet-50 to extract a total of 2,000 features directly from the movie and cell extraction data, which we used to classify the cell candidates. Our engineered features outperformed this approach, which was surprisingly good at rejecting cells (outperforming traditional, but not our, features). As noted above, the fact that deep-learning-based feature extraction was not as successful in 1p movies is in line with the conclusions of prior research (CAIMAN, Cascade, etc.). **Finally, we wish to emphasize that even this simple experiment (gathering features from average movie frames with ResNet-50) took us roughly 30 minutes for 10,000 cells with an NVIDIA RTX 4090 GPU, and about 2 hours without one. This waiting time exceeds our design constraints.**
Pdf: /pdf/0fa51eb5251367aaa7ca5969bae64343465a5e32.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
What is my quantum computer good for? Quantum capability learning with physics-aware neural networks | Accept (poster) | Summary: The paper introduces a novel quantum-physics-aware neural network (qpa-NN) architecture for quantum capability learning. The model achieves error reduction in capability prediction on both experimental and simulated data.
Strengths: 1. The qpa-NN architecture incorporates quantum physics principles, which provides a new perspective of designing models in the field of learning-based quantum capability learning.
2. The demonstrated reduction in mean absolute error over CNN-based models is commendable.
Weaknesses: 1. The reviewer's main concern regards the qubit scale of the dataset. The dataset used in the experiments, which consists of at most 5 qubits, appears too small both from the perspective of current quantum hardware and from that of classical simulations. The reviewer is curious to know the reason behind the inability to collect data on systems with more qubits (either experimental or simulated data). Is it due to the high difficulty of experimental deployment, or is it because the current models struggle to train on larger datasets?
2. The evaluation method mentioned in the paper, Process Fidelity (Eq. 3), may have scalability issues. Although the authors attempted to explain in Section 3.2 how some approximations can be used to calculate Eq. 3 relatively efficiently, this description is hard to follow. For instance, can the proposed approximation method avoid exponential computational and storage complexity? Additionally, the reviewer hopes that, in the revised version, the authors can provide a discussion of the differences between the approximations designed in this paper and the estimation methods in [Proctor et al., 2022], or clarify whether the approximations in this paper are merely direct adaptations of the latter's estimation methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the proposed architecture handle scalability challenges as the number of qubits increases?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s summary of our paper and their recognition of its strengths. Your thorough review and constructive feedback have been invaluable in refining our manuscript. Incorporating your suggestions will greatly enhance its quality and impact.
In response to your feedback, we plan to include the following revisions in our final manuscript:
- A new large-scale demonstration in which we train a qpa-NN to predict the process fidelity of 100-qubit circuits executed on a simulated quantum computer experiencing low levels of coherent and stochastic errors. We believe that this demonstration conclusively shows that the qpa-NNs scale to large-scale systems even when predicting process fidelity.
- A short discussion on how our qpa-NN approach to capability learning differs from, yet is synergistic with, the mirror circuit fidelity estimation (MCFE) protocol introduced in [Proctor et. al., 2022].
- Clearer exposition on how our qpa-NNs estimate process fidelity without exponential computational and storage complexity by only predicting the most influential terms in the first-order approximation to process fidelity (Eq. 3).
We now address, in order, the three listed weaknesses and explain how our new results/edits address each of the weaknesses.
**Weakness 1**: the lack of a large-scale demonstration (same as the response to reviewer beTU).
We agree with the reviewer that the lack of a large-scale demonstration is the primary weakness of our paper, and we hope that our new 100-qubit demonstration will satisfy the reviewer. We believe that the qpa-NNs’ strong performance on this data conclusively demonstrates the scalability of our approach.
Fig. 1 in the rebuttal material depicts the results from training a qpa-NN to predict the process fidelity of circuits run on a noisy 100-qubit quantum computer experiencing low levels of weight-1 coherent and stochastic errors. Because of the infeasibility of performing a strong simulation of coherent errors on a 100-qubit device (fully modelling coherent errors is equivalent to universal quantum computation), we used a first-order approximation to the strong simulation, similar to the simulation technique used in the non-Markovian simulations in [Hothem et. al., 2023]. The qpa-NN achieves a mean absolute error of 0.097%. Crucially, these results demonstrate that it is both possible to construct qpa-NNs for arbitrarily large system sizes, and that the qpa-NNs can achieve excellent prediction accuracy at arbitrarily large system sizes.
**Weakness 2**: unclear explanation of scalability.
In addition to including two new empirical demonstrations of our the qpa-NNs’ scalability, we plan to clarify the theoretical arguments for the scalability of qpa-NN’s in the text. There are two primary criteria for developing a scalable machine learning technique to assess the performance of a quantum computer. The first is the ability to efficiently (i.e., without exponential time or sample complexity) gather training data and the second is ensuring that the model’s size grows polynomially in the device size. The qpa-NN approach satisfies both criteria.
We can efficiently gather training data for qpa-NNs trained on experimental hardware, regardless of whether we are using the probability of successful trials (PST) or process fidelity. A definite-outcome circuit's PST can be efficiently estimated to 1/sqrt(N) precision by running the circuit N times on the quantum computer, while any circuit's process fidelity can be estimated to O(1/sqrt(N)) precision by running three ensembles of closely related circuits O(N) times using MCFE [Proctor et. al., 2022]. The only limiting factor in gathering training data on real hardware is the noise level in current real-world, large-scale devices.
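The 1/sqrt(N) precision quoted above is ordinary binomial shot noise. A quick numerical sanity check (our own toy illustration, not code from the paper; `p_true` is an arbitrary assumed success probability):

```python
import random
import statistics

random.seed(1)
p_true = 0.9  # assumed true probability of a successful trial (PST)

def estimate_pst(n_shots):
    """Estimate PST from n_shots simulated runs of a definite-outcome circuit."""
    successes = sum(random.random() < p_true for _ in range(n_shots))
    return successes / n_shots

def empirical_std(n_shots, trials=2000):
    """Standard deviation of the PST estimator across repeated experiments."""
    return statistics.pstdev(estimate_pst(n_shots) for _ in range(trials))

s100 = empirical_std(100)
s400 = empirical_std(400)

# Quadrupling the shot count roughly halves the standard error: ~ 1/sqrt(N).
assert 1.5 < s100 / s400 < 2.5
```

The same argument gives the O(1/sqrt(N)) rate for MCFE's fidelity estimates, since each of its circuit ensembles is likewise sampled a finite number of times.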
Likewise, we limit the growth of our qpa-NNs’ parameter counts to be polynomial in the number of qubits by only tracking the effect of the most important errors in a device, the local, low-weight errors. Physics intuition tells us that if a gate G acts on qubit Q, then the most probable errors induced by G will affect a local neighborhood of Q. Moreover, these errors are unlikely to affect too many qubits at once. In other words, gates induce low-weight, local errors, as determined by a device’s connectivity graph. Our qpa-NNs reflect this intuition by only trying to predict the impact of local, low-weight errors on a circuit’s fidelity and completely ignoring the contributions of the highly improbable high-weight and non-local errors.
As a result, a qpa-NN’s parameter count grows polynomially in device size. See Fig. 4 for a visualization of how a qpa-NN’s parameter count grows with device size for a fixed set of hyperparameters. For a device with a grid connectivity graph, the parameter count grows quadratically, while it grows linearly for a device with a ring connectivity graph.
**Weakness 3**: lack of comparison with mirror circuit fidelity estimation (MCFE).
We thank the reviewer for highlighting this point of confusion. It appears that we did not clearly explain how our method differs from the method of mirror circuit fidelity estimation (MCFE) proposed in [Proctor et. al., 2022]. We will clarify how our method for training qpa-NNs to predict process fidelity scales and differs from MCFE.
MCFE and qpa-NNs solve different problems. qpa-NNs aim to construct predictive models of a quantum computer’s capability, predicting the process fidelity of any new circuit from a class after training on a representative sample. MCFE, however, estimates the process fidelity of a known target circuit by running related circuits on a quantum computer. While MCFE does not provide a model, it is useful for gathering training data to build predictive models.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Most of my questions are solved. I would be happy to raise my score.
Strengths: 1. qpa-NN outperforms the previous CNN method in predicting circuit success probability.
Weaknesses: 1. Given that neural network-based methods for predicting circuit success probability have been previously discussed in the literature, the novelty of the approach is not sufficiently convincing.
2. The explanation of how the qpa-NNs leverage graph structures could benefit from further elaboration to enhance clarity.
3. The scope of the considered noise types is limited, and the authors have benchmarked their results exclusively on small-scale devices.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have the authors compared their qpa-NN with the so-called “stability baseline model”, (which was mentioned to be better than CNN-based method Hothem et al. [2023c]) ?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There does not seem to be negative social impact of this theoretical research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's summary of our paper and their recognition of its strengths, especially our superior performance compared to CNNs. Your review and constructive feedback will be instrumental in improving our manuscript, significantly enhancing its clarity and impact.
In response to your feedback, we plan to include the following revisions in our final manuscript:
- A new large-scale demonstration of a qpa-NN trained to predict the process fidelity of 100-qubit circuits run on a simulated quantum computer experiencing low levels of coherent and stochastic errors. This demonstration conclusively shows that the qpa-NNs scale to large-scale systems even when predicting process fidelity.
- An additional paragraph outlining the novelty of our work.
- A re-written Section 3.1 that better explains how our qpa-NNs leverage graph structures to reduce their parameter counts and make accurate predictions of a circuit’s fidelity.
- Results from three new 4-qubit demonstrations using three new error models.
We now address the four listed weaknesses and explain how our new results/edits address each of the weaknesses. We also include a comparison to the stability baseline model (SBM) from Hothem et. al. [2023c].
**Weakness 1**: the lack of a large-scale demonstration (same as the response to reviewer k1xb).
We agree with the reviewer that the lack of a large-scale demonstration is the main weakness of our paper. We hope that our new 100-qubit demonstration will satisfy the reviewer. We believe that the qpa-NNs’ strong performance on this data conclusively demonstrates the scalability of our approach.
Fig. 1 in the rebuttal material depicts the results from training a qpa-NN to predict the process fidelity of circuits run on a noisy 100-qubit quantum computer experiencing low levels of weight-1 coherent and stochastic errors. Because of the infeasibility of performing a strong simulation of coherent errors on a 100-qubit device, we used a first-order approximation to the strong simulation, similar to the technique used in the non-Markovian simulations in [Hothem et. al., 2023]. The qpa-NN achieves an MAE of 0.097%. This result shows that it is possible to construct qpa-NNs for large system sizes, and that they can achieve excellent prediction accuracy at large system sizes.
**Weakness 2**: novelty.
While we appreciate the reviewer’s concerns about our work’s novelty, we disagree with their assessment. In particular:
- Our approach is fundamentally different from past works in that it uses bespoke networks inspired by an in-depth understanding of the underlying physics of quantum computers. This innovation is akin to introducing physics-informed neural networks for solving PDEs or CNNs for image recognition, but for a more specific task.
- Moreover, we are the first to apply any kind of neural network to predicting process fidelity. Unlike PST, process fidelity is defined for any quantum gate, circuit, or channel and is the metric of choice for reporting gate and circuit performance. It is also estimated by many popular benchmarking protocols (e.g., randomized benchmarking). Our work thus addresses a widely applicable problem that is of general interest to the quantum computing community.
We will clarify the novelty of our work by adding a paragraph outlining our contributions in the next draft.
**Weakness 3**: unclear explanation of how the qpa-NNs leverage graph structures.
Our qpa-NNs leverage a graph structure to greatly reduce their size, enabling the construction of qpa-NNs for modelling many-qubit systems. Our qpa-NNs are scalable because they model a polynomial-sized set of error types, and they use a graph to represent which parts of a quantum circuit a particular error's rate (at circuit layer i) will plausibly depend on, which specifies the input to that error's associated subnetwork. This error rate dependency is typically associated with the physical layout of a quantum computer's qubits, and so we use the graph encoding of this layout to model this dependency. We plan to revise our description of how our qpa-NNs leverage graph structures to better explain this.
**Weakness 4**: limited noise models.
We plan to include results from three new 4-qubit demonstrations to allay the reviewer’s concern that we only evaluated our qpa-NNs on a limited number of error models. Copying from our response to reviewer oCVW:
We performed three new 4-qubit demonstrations to better support our claim that the qpa-NNs improved performance is due to their ability to model coherent noise. Each new demonstration used the same setup as the original 4-qubit demonstration, but with a different error model. The error models differed in the ratio of total coherent to stochastic noise allowed in each gate’s error model. If we include the original demonstration, we now have results from error models whose ratios range from no coherent noise (maximum H error of 0 and maximum S error of .001) to purely coherent noise.
Our final manuscript will include a broad suite of realistic noise models, ranging from complicated experimental noise models to simple noise models for 100-qubit quantum computers.
**Additional discussion**: comparison to the stability baseline model (SBM).
The qpa-NNs perform favorably compared to the SBM in Hothem et al. [2023c]. Fig. 2 in the rebuttal document shows that the qpa-NN outperforms the SBM on one device, achieves comparable performance on three devices, and performs slightly worse on two devices.
However, it is a category mistake to compare the SBM to the qpa-NNs because the SBM is not a predictive model of a device’s capability. The SBM is constructed by rerunning each circuit and comparing the results to those from the original run. Therefore, it quantifies how stable the device is by measuring how a circuit’s fidelity changes over time. Alternatively, it measures how well a device’s past performance predicts its future performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. A good portion of my concerns have been addressed. | Summary: The paper presents an approach to improve the state of the art in quantum capability learning, which is the task to predict the prowess for error when running a specific quantum algorithm given a fixed quantum computer. The tackle this, the authors introduce some specializations on (graph) neural networks that fit the nature of quantum computers and their errors especially well. The new approach yields better results than purlely CNN-based methods on a specific synthetic and empirical data.
Strengths: The approach is very interesting from both a quantum and a machine learning point of view. The issue of quantum capability learning is described in great detail, and the innovation of the approach becomes clear. In the method of intertwining neural network architectures with their subject of training, especially when that subject is a quantum computer, I see great potential for further investigation.
Weaknesses: To be frank, I consider the issue of quantum capability learning (especially as it is motivated in this paper) a rather artificial one, given the quite small number of potentially useful quantum algorithms. And I think this leads to some rather weak arguments, mostly in the motivation of this paper. However, quantum capability learning (as the authors examine it in this paper) is still an interesting and relevant topic. I suggest cutting down on practical promises and focusing on the pure challenge of predicting a quantum computer's behavior using classical neural networks, which I consider a sufficient motivation for the study of this topic.
While the performance-based analysis has good results, I would have wished for a more in-depth take that can actually clarify the speculation posed in the introduction: "Our qpa-NNs' improved performance is likely largely due to their improved ability to model the impact of coherent errors..."
Technical Quality: 2
Clarity: 3
Questions for Authors: None.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: No comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's summary of our paper and their recognition of its strengths. We are also pleased that the reviewer finds modelling a quantum computer's performance with neural networks to be an interesting and worthwhile problem. Your detailed review and constructive feedback will be instrumental in improving our manuscript. Implementing your suggestions will significantly enhance our paper's quality and impact.
In response to your feedback, we plan to make the following changes to our paper:
- Include new 4-qubit demonstrations on simulated quantum computers experiencing different ratios of coherent to stochastic error rates.
We now address, in order, the two listed weaknesses and explain how our new results/edits address each of the weaknesses.
**Weakness 1**: lack of motivation behind quantum capability learning.
We respectfully disagree with the reviewer’s assertion that the motivation for quantum capability learning is weak given the small number of useful quantum algorithms. We believe that quantum capability learning will be especially important in the early fault-tolerant era as devices grow beyond our abilities to classically simulate, while remaining too noisy or small to reliably implement general quantum algorithms. It is precisely in this early fault-tolerant era when we will need accurate, scalable, and fast-to-query predictive models of a quantum computer’s capability to better understand which experiments to run and which devices to build. Nonetheless, we are happy to read that the reviewer believes that the “pure challenge of predicting a quantum computer’s behavior using classical neural networks” is relevant and sufficiently motivating.
**Weakness 2**: failure to substantiate the claim that the qpa-NNs outperform the CNNs due to their improved ability to model the impact of coherent errors.
We performed three additional 4-qubit demonstrations to better support our claim that the qpa-NNs’ improved performance is due to their improved ability to model coherent noise. Each new demonstration used the same setup as the 4-qubit demonstration in the original draft, except using a different error model. The error models differed in the ratio of total coherent to stochastic noise allowed in each gate’s error model. If we include the original demonstration, we now have results from error models whose ratios range from no coherent noise (maximum H error of 0 and maximum S error of 0.001) to purely coherent noise.
As shown in Fig. 3, as we increase the ratio of coherent to stochastic errors, the CNNs’ performances begin to diverge from the qpa-NNs' performances, before ultimately stabilizing at a statistically significant worse prediction accuracy. These results confirm our claim that the qpa-NNs outperform the CNNs, in part, because of their improved ability to model the effect of coherent errors on process fidelity.
---
Rebuttal Comment 1.1:
Comment: I do not agree with the points made here.
Most importantly, the fact that the qpa-NN's performance scales better with quantum noise compared to a classical approach is still not a sufficient argument that handling quantum noise is the reason for that performance.
I advise caution about making overly grand claims in either case.
Rebuttal: We sincerely thank the referees for their time and insightful comments on our manuscript. Your thorough review and constructive feedback have been invaluable in identifying areas for improvement and clarity. We are confident that incorporating your suggestions will significantly strengthen the quality and impact of our paper, making it worthy of inclusion at NeurIPS 2024.
In response to each reviewer’s feedback, we plan to modify our paper to:
- Better support our claim that our qpa-NN approach scales by including a new large-scale demonstration on a simulated 100-qubit computer.
- Better support our claim that the qpa-NNs’ improved performance is due to their improved modeling of coherent errors by adding an appendix with results from three new simulated 4-qubit devices, each experiencing different ratios of coherent to stochastic error strengths.
- Clarify how our approach differs from, yet synergizes with, mirror circuit fidelity estimation (MCFE) [Proctor et al., 2022] by adding a few sentences to Section 2.2.
- More clearly explain how our approach avoids the exponential scaling afflicting other methods for predicting process fidelity by rewriting Section 3.2.
- Clarify how our qpa-NNs exploit the graph structure of a quantum computer’s native connectivity graph to efficiently track and model the effects of only the most relevant errors by rewriting Section 3.1.
- Better explain the novelty of our results by adding a paragraph to the introduction clearly stating this work’s novel contributions to the literature.
Additionally, we have provided a comparison between the qpa-NNs and the stability baseline model (SBM) in [Hothem et al., 2023c] in our rebuttal. We chose not to include these results in our revised paper as the SBM is not a predictive model and its inclusion would muddle the paper’s presentation. Nonetheless, we strongly believe that our revisions comprehensively address each reviewer’s critiques and that our final product will merit inclusion in this year’s NeurIPS. Lastly, we include an attached PDF with supporting figures.
We now briefly outline how our proposed revisions address specific reviewer critiques. We provide full discussions in our responses to each individual reviewer.
- Our 100-qubit demonstration should allay reviewers beTU’s and k1xb’s concerns about analyzing only small-scale devices. As shown in Fig. 1 of the rebuttal PDF, our trained qpa-NN obtained a mean absolute error of 0.097% when predicting the process fidelity of 100-qubit circuits run on a simulated 100-qubit quantum computer experiencing weight-1 coherent and stochastic errors. Further details are provided in our rebuttals to each reviewer. Unfortunately, it is not possible to perform a 100-qubit demonstration on real hardware as IBM’s cloud-accessed, 127-qubit processors are currently too noisy to reliably execute circuits with high fidelity.
- New 4-qubit simulations should satisfy reviewer oCVW’s desire for a more thorough investigation of whether the qpa-NNs’ improved performance is due to their improved ability to model coherent errors, as well as reviewer beTU’s concern about the scope of the considered noise types. In our new 4-qubit demonstrations, we repeated the 4-qubit demonstration in the paper using three new error models, each with a different ratio of coherent to stochastic errors. As shown in Fig. 3, as we increase the ratio of coherent to stochastic errors, the CNNs’ performances rapidly deteriorate, while the qpa-NNs’ performances ultimately stabilize. These results confirm our claim that the qpa-NNs outperform the CNNs, in part, because of their improved ability to model the effect of coherent errors on process fidelity.
- Revisions clarifying how our approach differs from, yet synergizes with, MCFE address reviewer k1xb’s request for clarification. Our revisions more clearly explain the different goals of our qpa-NN approach—to build predictive models of a quantum computer’s capability that can predict a circuit’s fidelity without running the circuit—and MCFE—to estimate the fidelity of select circuits by running those circuits on the quantum computer. Our revisions also clarify how to use MCFE to efficiently gather training data for the qpa-NNs, a necessary step in a scalable pipeline for building predictive models.
- Revisions clarifying our approach to predicting process fidelity address reviewer k1xb’s concern that our approach may have scalability issues. In our revisions, we explain how we avoid the exponentially expensive cost of evaluating equation 3 by focusing solely on predicting the effect of the most important errors in a quantum computer—local, low-weight errors—on process fidelity. Because the number of local, low-weight errors scales polynomially with device size, our qpa-NNs’ parameter counts also scale polynomially with device size. Put another way, we design qpa-NNs to approximate eq. 3 by only predicting a polynomial-sized set of the most important terms in equation 3.
- Revisions clarify how our qpa-NNs use a graph structure to encode the spatial dependencies of a quantum computer’s error structure, and how this enables our qpa-NNs to have a small, polynomial number of parameters.
- Revisions more clearly stating our contributions address reviewer beTU’s concern that our approach is not “sufficiently novel.” As explained in our rebuttal, we introduce a novel, sophisticated neural network architecture for solving the quantum capability learning problem and are the first group to tackle the problem of predicting process fidelity, the most widely used circuit-success metric in the quantum computing community.
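As a back-of-the-envelope sketch of this counting argument (our illustration with assumed conventions, not code from the paper: 3 single-qubit Pauli errors per qubit and 9 two-qubit Pauli errors per edge of the connectivity graph):

```python
# Count local, low-weight error locations on a device; this grows
# polynomially with device size, unlike the ~4^n terms that a full
# evaluation of equation 3 would require.
def n_low_weight_errors(n_qubits, edges):
    weight_1 = 3 * n_qubits    # X, Y, Z on each individual qubit
    weight_2 = 9 * len(edges)  # {X,Y,Z} x {X,Y,Z} on each connected pair
    return weight_1 + weight_2

# A 100-qubit line (99 edges) has only 300 + 891 = 1191 such locations.
line_edges = [(i, i + 1) for i in range(99)]
```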
Pdf: /pdf/09ddffa5519a8b2b501138a364e29a483b643cfb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Convolutional Differentiable Logic Gate Networks | Accept (oral) | Summary: This paper proposes a novel computational architecture for differentiable logic gate networks (LGNs), a machine learning methodology that aims to learn networks of logic gates for fast, gate-efficient inference on logic gate-based hardware. Specifically, the authors propose extensions to a prior work on differentiable LGNs inspired by the computer vision literature. They propose (i) logic gate tree convolutions which are layers that convolve trees of logic gates with the input, (ii) logical or pooling (inspired by max pooling) layers that compute the disjunction of the receptive field, and (iii) residual initializations that bias the initial distribution over logic gates to be an identity gate of one of the two inputs. The authors also detail various computational techniques to optimize the efficiency of the new architecture in training, simulation and hardware.
Experimental results on computer vision tasks (CIFAR-10, MNIST, Fashion-MNIST) and extensive comparison to SOTA methods demonstrate that the proposed architecture achieves competitive (if not SOTA) accuracy while being either significantly smaller (in terms of gate count) or faster (in terms of inference speed on FPGAs or CPUs). The authors demonstrate through ablation studies that the proposed components and architectural choices are all integral to the achieved performance. Moreover, the authors provide experimental results for insightful studies on the proposed components, such as why logical or pooling doesn’t result in too much network activation, the induced stability of residual initializations, and the effects of gate distribution discretization.
Strengths: - The proposed methods build upon prior work on differentiable logic gate networks (LGNs) and take inspiration from the computer vision literature. The contributions are novel and advance the SOTA for fast and efficient inference of machine learning models. The results are of importance to embedded and real-time machine learning applications.
- Related work is discussed in detail, and experimental results are compared to various prior works.
- The submission is technically sound, with claims supported by experimental results (see weaknesses for point on statistical significance). The authors demonstrate through ablation studies the utility of each of the proposed architectural components, and discuss tradeoffs, strengths and weaknesses for their techniques.
- Methods are detailed enough for reproducing the proposed architecture and results.
- The authors provide substantial insight into methodology choices, making the presentation clear and informative.
Weaknesses: Lack of statistical significance measures for results. (The main prior work on differentiable LGNs (F. Petersen et al.) provides standard deviations in their appendix). However, the authors provide justification for this within the NeurIPS Paper Checklist.
Technical Quality: 4
Clarity: 4
Questions for Authors: - If we take the CIFAR-10 results as an example, another method achieved greater accuracy (91%, Hirtzlin et al.) but requires significantly more gates. What is the current limitation on scaling LogicTreeNets to larger gate counts (beyond 111M)? If, for example, a greater accuracy were desired.
- Similar to the study conducted by F. Petersen et al., is there any insight to be gleaned from the learnt distribution over logic gates in logic gate tree convolution kernels?
- Possible typos
- Lines 344-345: is a forward slash missing?: “transistor count / chip area”
- Line 351: is an M missing?: “55.8M gates”
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your extensive and positive feedback.
We greatly appreciate that you find our result of importance to embedded and real-time machine learning applications.
Thank you also for your praises wrt. coverage of related work, our technical soundness, our ablation studies, discussions of tradeoffs, detailed descriptions, and the insights into methodology choices that we provide.
**Weaknesses:**
> Lack of statistical significance measures for results. (The main prior work on differentiable LGNs (F. Petersen et al.) provides standard deviations in their appendix). However, the authors provide justification for this within the NeurIPS Paper Checklist.
Thank you for pointing this out. In the following, we present standard deviations (over 10 seeds) for our smaller models.
| **CIFAR-10** | Originally reported | With standard deviations |
|--|--|--|
| LogicTreeNet-S | 56.71% | 56.55±0.46% |
| LogicTreeNet-M | 70.65% | 70.72±0.37% |
| **MNIST** | Originally reported | With standard deviations |
|--|--|--|
| LogicTreeNet-S | 98.06% | 98.27±0.25% |
| LogicTreeNet-L | 99.11% | 99.10±0.10% |
We will extend the standard deviations to the larger models (takes very long) as well as Fashion-MNIST (we prioritized the other experiments for now) for the camera-ready.
**Questions:**
> If we take the CIFAR-10 results as an example, another method achieved greater accuracy (91%, Hirtzlin et al.) but requires significantly more gates. What is the current limitation on scaling LogicTreeNets to larger gate counts (beyond 111M)? If, for example, a greater accuracy were desired.
Our primary limitation lies in computational resources for training, both wrt. VRAM and raw compute. In the future, we hope to train even larger and deeper models.
> Similar to the study conducted by F. Petersen et al., is there any insight to be gleaned from the learnt distribution over logic gates in logic gate tree convolution kernels?
Yes, we provide a study on the learnt distribution over logic gates in logic gate tree convolution kernels in the Author Response PDF page. We also compare it to the same model but with Gaussian initializations.
It actually illustrates an important point: the majority of gates in the network are residual gates (A).
> typos
Thank you for spotting these typos.
We have fixed them, and will do another proofreading pass for the camera-ready.
---
Rebuttal 2:
Comment: I thank the authors for their response and appreciate their effort in addressing my concerns and questions.
With standard deviations being added to the results, and the additional study on learnt distributions over logic gates, I have updated my score of soundness from 3 to 4. | Summary: In this work the authors propose a convolutional-like architecture along with two novel mechanisms oriented toward differentiable logic gate neural networks, making the training and inference of such networks possible for more demanding tasks. More specifically, the authors augment the current state-of-the-art capabilities of differentiable logic gate networks by introducing a convolutional architecture and training approach that is based on logic gates, which, together with the proposed “logical or pooling” and “residual initialization”, achieves higher accuracies on various datasets with a lower number of gates, reducing inference time significantly. The authors claim that the proposed method unlocks the capabilities of differentiable logic gate networks, providing a comprehensive review of works that target efficient inference and discussing the benefits of adopting the proposed architecture in applications that require efficient and low-cost inference.
Strengths: The paper is well organized providing a comprehensive review on methods that target efficient inference, discussing in depth the benefits of logic gate networks. The technical details, arguments and experimental results provided in the paper regarding the realization of the method in hardware are convincing.
The authors provide some experimental results in actual hardware demonstrating the effectiveness of the proposed method in terms of efficiency.
The authors provide interesting ablation studies to support experimentally some of the designing decisions.
The motivation is solid and easy to understand. Additionally, the contribution is clear, achieving state-of-the-art performance in the context of larger logic neural networks.
Weaknesses: In many cases the work seems incremental relative to [1], without, however, overcoming or justifying some theoretical gaps present in this previous work. More specifically, the random connections applied in [1] are adopted in this work without being well justified theoretically and without an alternative being proposed.
In addition to that, I find myself referring occasionally to [1] in order to understand some technical details. For example, the differentiable logic gates are presented only schematically in the paper.
From a technical point of view, I find it difficult to conceptualize the computational graph that is built during the training. Do the authors introduce a projection layer, parameterized by vectors z, for each channel of the kernel on the available gates? To this end, is the z vector optimized taking the partial derivative of z on the classification loss? Introducing some details on the optimization process will be useful.
The random selection of inputs on the receptive fields raises concerns regarding the consistency of the training process, with the paper not providing error bars for different training runs. Although the authors discuss the reasons for attaching higher probability to the logic gate choice A (or B), they do not discuss how they arrive at $z_3 = 4.905$. Such empirical decisions potentially hinder the generalization ability of the proposed method.
Proofreading is required. There are some minor typos in the paper and in the appendix (e.g., Paper-L293 “LGNs differ from the LGNs”, missing reference in L6 of the appendix).
[1] Petersen, Felix, et al. "Deep differentiable logic gate networks." Advances in Neural Information Processing Systems 35 (2022): 2006-2018.
Technical Quality: 4
Clarity: 4
Questions for Authors: How many additional trainable parameters are introduced during training in contrast to the traditional CNNs and which of them are discarded during the inference?
How do the authors comment on the observation that the proposed method seems to not generalize equally well to smaller architectures?
How do the authors conclude on $z_3 = 4.905$?
I would like to stress the consistency of the proposed method in different training runs due to the fact that it is based on the random selection of inputs of the receptive field. Is the robustness of training preserved and what are their experimental observations?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Some technical details in the training process are not clear.
Some empirical decisions made on paper are not well justified neither experimentally nor theoretically.
The proposed method leads to lower accuracies in smaller models (e.g., LogicTreeNet-S in Table 1).
Taking into account that they promote logic gate choice A, it would be very interesting if the authors reported the per-layer distribution of logic gates after training. This could also be interesting in contrast to Gaussian initialization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful and positive feedback, and for appreciating that our "paper is well organized", provides a comprehensive review on methods that target efficient inference, and discusses in depth the benefits of logic gate networks.
Thank you also for appreciating the technical details regarding the realization of the method in hardware, and expressing that you find the realization convincing.
Finally, we appreciate that you find our contribution clear, achieving state-of-the-art performance in the context of larger logic neural networks.
**Weaknesses:**
> [...] random connections that applied in [1] are adopted in this work without well being theoretically justified or proposing an alternative way.
We would like to clarify that, while our connections still have some level of randomness, they are substantially more structured. In particular, the convolutional layers are binary trees, so there is a deterministic structure within the convolutional kernels. Moreover, we restrict the input connections to be from only two channels, which further improved performance.
> In addition to that, I find myself referring occasionally to [1] in order to understand some technical details. For example, the differentiable logic gates are presented only schematically in the paper.
Thank you for this remark; we will extend the discussion of differentiable logic gates in the camera-ready version, where we have an additional page.
> [...] computational graph [...] Do the authors introduce a projection layer, parameterized by vectors z, for each channel of the kernel on the available gates? To this end, is the z vector optimized taking the partial derivative of z on the classification loss? Introducing some details on the optimization process will be useful.
Yes, the vectors $z$ are optimized by taking the derivative of $z$ on the classification loss.
These vectors $z$ are the logits of the probability distributions over choices of logic gates, and can be mapped to those probabilities via softmax (Eq. 1). Accordingly, we do not use a projection layer. We will add clarifications to the camera-ready.
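For concreteness, here is a minimal sketch of this setup (our illustration: the relaxation treats inputs $a, b \in [0, 1]$ as probabilities, following the standard differentiable-LGN formulation; index 3 = 'A' matches the value discussed below, but the ordering of the remaining gates is an assumption):

```python
import torch

# All 16 two-input logic gates, relaxed to real-valued (probabilistic)
# inputs a, b in [0, 1].
GATES = [
    lambda a, b: torch.zeros_like(a),      # 0:  False
    lambda a, b: a * b,                    # 1:  A and B
    lambda a, b: a - a * b,                # 2:  A and not B
    lambda a, b: a,                        # 3:  A (the "residual" gate)
    lambda a, b: b - a * b,                # 4:  not A and B
    lambda a, b: b,                        # 5:  B
    lambda a, b: a + b - 2 * a * b,        # 6:  A xor B
    lambda a, b: a + b - a * b,            # 7:  A or B
    lambda a, b: 1 - (a + b - a * b),      # 8:  not (A or B)
    lambda a, b: 1 - (a + b - 2 * a * b),  # 9:  not (A xor B)
    lambda a, b: 1 - b,                    # 10: not B
    lambda a, b: 1 - b + a * b,            # 11: A or not B
    lambda a, b: 1 - a,                    # 12: not A
    lambda a, b: 1 - a + a * b,            # 13: not A or B
    lambda a, b: 1 - a * b,                # 14: not (A and B)
    lambda a, b: torch.ones_like(a),       # 15: True
]

def soft_gate(z, a, b):
    """Differentiable gate: softmax(z)-weighted mixture of all 16 gates."""
    p = torch.softmax(z, dim=0)
    return sum(p[i] * g(a, b) for i, g in enumerate(GATES))

def hard_gate(z, a, b):
    """After training: discretize to the single most likely gate."""
    return GATES[z.argmax().item()](a, b)
```

During training, the loss gradient flows into $z$ through `soft_gate`; at inference time only the argmax gate of `hard_gate` remains.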
> The random selection of inputs on the receptive fields raises concerns regarding the consistency of the training process, with the paper not providing error bars regarding different training runs.
>
> I would like to stress the consistency of the proposed method in different training runs due to the fact that it is based on the random selection of inputs of the receptive field. Is the robustness of training preserved and what are their experimental observations?
Thank you for raising this important concern. Yes, consistency between training runs is given, especially for larger models, whereas for the smallest models the stochastic effects can be a bit larger.
In the following, we provide means and standard deviations over 10 seeds for our smaller models:
| **CIFAR-10** | Originally reported | With standard deviations |
|--|--|--|
| LogicTreeNet-S | 56.71% | 56.55±0.46% |
| LogicTreeNet-M | 70.65% | 70.72±0.37% |
| **MNIST** | Originally reported | With standard deviations |
|--|--|--|
| LogicTreeNet-S | 98.06% | 98.27±0.25% |
| LogicTreeNet-L | 99.11% | 99.10±0.10% |
| LogicTreeNet-XLD3 | (new) | 99.24±0.06% |
We will extend the standard deviations to the larger models (takes very long) as well as Fashion-MNIST (we prioritized the other experiments for now).
> Although the authors discuss the reasons for attaching higher probability to the logic gate choice A (or B), they do not discuss how they conclude on $z_3=4.905$. Such empirical decisions potentially hinder the generalization ability of the proposed method.
>
> Question: How do the authors conclude on $z_3=4.905$?
$z_3=4.905$ is the value that leads to 90% probability being assigned to the gate choice 3 ('A').
We have clarified this in the revision. Also, we provide a code sketch for an explicit computation below.
```python
>>> import torch
>>> z = torch.zeros(16)
>>> z[3] = 4.905
>>> torch.softmax(z, dim=0)
tensor([0.0067, 0.0067, 0.0067, 0.9000, 0.0067, 0.0067, 0.0067, 0.0067,
        0.0067, 0.0067, 0.0067, 0.0067, 0.0067, 0.0067, 0.0067, 0.0067])
```
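For completeness, the value has a closed form: with the remaining 15 of 16 logits at zero, the softmax assigns the chosen gate probability $e^{z}/(e^{z}+15)$, so $z_3 = \ln(15 \cdot 0.9 / 0.1) = \ln(135) \approx 4.905$. A short sketch (the helper name is ours):

```python
import math

# Invert the softmax: with all other logits at 0, the chosen gate gets
# probability p = e^z / (e^z + (n_gates - 1)),
# hence z = ln((n_gates - 1) * p / (1 - p)).
def residual_logit(p_target, n_gates=16):
    return math.log((n_gates - 1) * p_target / (1 - p_target))

z3 = residual_logit(0.9)  # ln(135) ≈ 4.905
```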
Moreover, to further address your concern, *we performed an additional ablation study*, where we vary $z_3$ between 1.5 and 7.5, which we provide in the Author Response PDF page with the General Rebuttal (Figure 2).
> Typos
Thank you for pointing out the typos; we have corrected them, and will proofread everything for the camera-ready as suggested.
**Questions:**
> How many additional trainable parameters are introduced during training in contrast to the traditional CNNs and which of them are discarded during the inference?
We use 16 parameters (vector $z$) for each differentiable logic gate. After training, each gate is discretized to a single parameter, i.e., the choice of logic gate. Finally, during the simplification process, depending on the exact model, 60-80% of the logic gates are removed.
> How do the authors comment on the observation that the proposed method seems to not generalize equally well to smaller architectures?
>
> The proposed method leads to lower accuracies in smaller models (e.x. LogicTreeNet-S of the table 1)
First, we would like to state that we designed each of our models based on model size "L".
Thus, for the smallest model, in order to match the number of logic gates with the baselines, we had to drastically reduce the number of channels down to 40; this was not the optimal model for the small size, but we maintained it for consistency.
> Taking into account that they promote logic gate choice A, it would be very interesting if the authors report the per layer distribution of logic gates after the training. This could be interesting also in contrast to Gaussian initialization.
Thank you for this request. We have added a visualization of the per layer distribution of logic gates after the training to the Author Response PDF page.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clarifications given and I appreciate their effort in answering my comments.
The authors discuss and provide clarification to my comments including additionally experimental results. Thus, I update the score of both soundness and presentation from 3 to 4. Additionally, they address my main concern regarding the stochastic effects of random connections. To this end, I update the score for contribution from 3 to 5 and overall score accordingly. | Summary: The presented work is a significant extension to "Deep Differentiable Logic Gate Networks" previously presented at NeurIPS 2022 [7]. Additional contributions are the support for convolutions including logic gate trees / or-pooling and residual initializations. All additions together allow to train logic gate networks that are deeper, achieving SOTA accuracies and beyond while using remarkably fewer resources during inference and training as well. The authors promise to also make the code publicly available.
Strengths: Improving efficiency of (small scale) neural networks substantially. Lowest latency of all SOTA baseline results, the majority of them being much slower with even worse accuracy.
Weaknesses: There are a few issues with the clarity of the presentation. Upfront it should be mentioned that the Appendix is vital to understand many details (architectures, choice of parameters, memory usage, memory access, etc.) and should certainly be published as well. It was only supplementary material as part of the review.
Figure 4 only shows an effect, but does not explain why pre or-pooling is superior to the other 2 and the text does neither.
Lines 228-229 seem to contradict the statement made in line 115. Does the training time mentioned in line 115 mark a baseline? If yes, then please state that and put it in relation to the "substantially improved computational training efficiency" later on.
In figure 6 (and from the associated text) it is not clear which 10 of the 18 subnetworks are trained. Is it possible to mark those? Are the blue networks the connection index tensors? A few more detailing remarks would help to understand the figure better.
Table 1 does not include a SOTA baseline using float weights. Even if that is a bit out-of-scope, it would help to put accuracies vs. number of bits (e.g. 32 (for FP32) * N parameters) in perspective.
Table 2 lists results for a Xilinx XC7Z020. If not mistaken, the upper limit for the number of configurable gates is ~1.3M. How does the execution of models M/L/G that all exceed that number by far actually work? Same for the MNIST model L in Table 3. Are any additional resources of the FPGA device being used? A breakdown would be very useful.
Table 5 contains a line for "No or pooling". It is unclear why it lists the number of total layers to be 10 and not 14. Please explain.
Appendix, Section A.3.3 CPU Inference Times: the CPU in use is not being mentioned. Is this a desktop/workstation CPU or the ARM-based CPU of the Xilinx XC7C020?
Typos:
line 441: "This means the that the..." -> "This means that the..."
Appendix, line 6: "... from Figure ??..." -> please cite the correct figure, my guess is #6.
Technical Quality: 4
Clarity: 3
Questions for Authors: Reflecting on lines 216-226: the reviewer's personal view of residual connections is similar to the authors' and can be summarized as a means to preserve (mutual) information throughout the network. Although residual initializations seem pivotal to training LGNs, they could potentially also help float-based NN training without the need to add residual connections. If the authors share that view it would be great to add a discussion on this to the presentation.
Why is the choice of gates being limited to 2-input variations? Especially in the context of convolutions and pooling, wouldn't it make sense to also allow for N-input OR-gates with N >> 2? It would also allow for reducing depth and signal propagation delays.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Despite improved training and inference efficiency, experimental results are limited to very small classification tasks. A single bigger task would prove scalability (or not).
Lines 416-417. The chosen model sizes (S,M,L,G) do not prove saturation of accuracy (except maybe for MNIST), but you even mention that they improve with increasing model depth. If there's a reason why you stop early, please mention it in the presentation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for providing such extensive and positive feedback, and for appreciating our substantial efficiency improvements, and achieving the lowest latency of all SOTA baseline results.
Due to the character limit, we keep our responses short; please let us know if you would like us to elaborate.
> [...] Appendix is vital [...]
Thank you for this remark. For the camera-ready, papers have an additional page, so we can fit a few more details in the main paper and improve the clarity. Yes, we will publish the appendix as well.
> Figure 4 only shows an effect [...]
Fig. 4 indeed only shows the effect that the model automatically regularizes itself to have an activation level of around 50%. (Ideally, act. levels are ~50% to maintain high information content.) "pre or-pooling" refers to the activation level before the or-pooling operation and "post or-pooling" refers to the activation level after the or-pooling operation (both from the same model). "no or-pooling" refers to a modified architecture without or-pooling. We clarified the notation and explanation for the revision.
> Lines 228-229 [...] line 115. [...]
In lines 114-115, we referred to vanilla LGNs, which we will explicitly clarify in the camera-ready.
We provide overall training speeds in the supplementary material, and offer to add a direct comparison between existing vanilla LGN, our vanilla LGN, and our convolutional LGN training speeds for the revision.
> In figure 6 [...]
We apologize for the ambiguity. The layers that are trained are the "Conv" (/"C") and the "Rand" layers; each of the "Conv" blocks in the figure contains 2 layers of logic.
Blue blocks illustrate the input + hidden states; the index tensors are encoded within the green "Conv" and "Rand" blocks. We will explain and mark this in the revision.
> SOTA baseline using float weights.
Thanks for the suggestion, we will include SOTA models with float weights into Table 1.
> Table 2 lists results for a Xilinx XC7Z020. [...]
For CIFAR-10, as listed in the caption (Tab. 2), the times for M/L/G are based on CPU simulations.
For the MNIST L model, you are correct: based on our initially reported number of gates, it would not have fit on the FPGA.
While we were able to fit the model on the FPGA, at the time of writing, we could only accurately compute the number of gates for the CIFAR-10 model, and used an upper bound for the numbers of gates for MNIST (we used the total number of gates during training).
In the rebuttal PDF (Fig. 1) we illustrate that the majority of gates are actually trivial feedforward "A" gates, which can be optimized away. Additional simplifications are also possible, e.g., "True and B" -> "B".
As Vivado optimizes for 6-LUTs, we could not read out the number of ASIC gates.
As there were no open-source libraries for logic gate network simplification that scale to our model, we developed our own stack, which at the time of writing only supported CIFAR.
Now, it also supports MNIST, and we can report more accurate numbers of ASIC gates for MNIST:
| MNIST | # Gates (prev.) | # Gates (new) |
|--|--|--|
| LTNet-S | 296 K | 197 K |
| LTNet-L | 4.74 M | 671 K |
(The actual number of logic gates will still be lower than this number.)
> Are any additional resources of the FPGA device being used?
So far, we only utilize Logic Cells and Flip Flops to keep it as close as possible to efficient ASIC designs.
> Table 5 [...] "No or pooling". [...]
The or pooling pools 2x2 inputs, and thus requires 2 levels (layers) of 2-input logic, which can be reduced to a single level on certain hardware (see below).
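To make the level counting concrete, here is a minimal sketch (not the authors' implementation) of 2x2 or-pooling expressed as two levels of 2-input OR gates, i.e., three 2-input gates in total:

```python
def or2(a: bool, b: bool) -> bool:
    # A single 2-input OR gate.
    return a or b

def or_pool_2x2(a: bool, b: bool, c: bool, d: bool) -> bool:
    # Level 1: two 2-input OR gates; level 2: one more OR gate.
    # Three 2-input gates arranged in two levels of logic; hardware with
    # native wide OR gates can collapse this to a single level.
    return or2(or2(a, b), or2(c, d))
```

The pooled output is True as soon as any of the four inputs is True, matching the OR-pooling semantics discussed in the paper.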
> Appendix, Section A.3.3 CPU [...]
The CPU in Appendix A.3.3 is an AMD Ryzen 5 7600X (consumer desktop) CPU, and we utilize only a single thread of the CPU.
Thanks for pointing out the typos, we fixed them for the camera-ready.
**Questions:**
> residual initializations
This could indeed be a very interesting direction for future work. We will include a discussion in the revision.
> limited to 2-input
Beyond what we discussed in the paper, we actually considered, implemented, and evaluated 4-input and 6-input gates in the convolutional kernels. We observed that the 2-input tree formulation leads to more favorable learning dynamics, as well as a better trade-off between numbers of gates and accuracy, which is why we stuck with 2-input gates.
> OR-gates with N >> 2
E.g., for OR-pooling, yes, these can be implemented, e.g., with a 4-input OR gate. Which specific hardware implementation is best wrt. chip area, delays, etc. depends on the particular ASIC manufacturing process. For our models, we count the 4-input OR gate as 3 gates to have a conservative estimate that applies independently of the hardware.
**Limitations:**
> Despite improved training and inference efficiency, experimental results are limited [...] scalability
Thanks for the questions. We are indeed actively working on larger classification tasks for the proposed approach and consider this an important research question.
Our current preliminary designs are internally reaching a performance of 48.06% on ImageNet (top-1).
We will continue this direction and will hopefully reach even more generalist models in the future.
> [...] saturation of accuracy [...] improve with increasing model depth. If there's a reason why you stop early, please mention it in the presentation.
The reason for us to stop rather early was computational training cost.
Networks with d=3 are even more expensive to train (~2x compared to d=2).
We let the d=3,3,3,3 model continue training after submission, and it reached 85.46% (vs. 85.22% in Tab. 5).
Notably, the deeper models are barely more expensive in inference because they end up with more residual gates.
Inspired by your comment, we extended the MNIST model to a larger and deeper model with d=3, and now achieve 99.24%, which improves the accuracy over all baselines:
| MNIST | Acc. | # Gates |
|--|--|--|
| LTNet-XLD3 | 99.24±0.06% | 1.82 M |
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clarifications given and I appreciate their effort in answering my comments and especially conducting (and possibly including) even more experiments.
> [...] Yes, we will publish the appendix as well.
Squeezing the Appendix into tiny space may make it less useful. In that case, feel free to consider the idea to publish a full, detailed version of the Appendix together with the source code repository and refer to it in the paper and/or short version of the Appendix.
> [...] In the rebuttal PDF (Fig. 1) [...], we developed our own stack, [...]
I'm using OpenReview for the first time, so please forgive me if I'm wrong here. I don't see a rebuttal PDF, only a PDF that seems to be the original version. Is there a (not that obvious) button to show the updated version? I do see a rebuttal PDF extending the Appendix and ablation study (alone).
For the second part of my citation, it would be great to describe the gate-level optimizations developed and applied and also refer to the publication of source code of it if you plan to share those details as well.
---
Rebuttal 2:
Comment: Thank you for responding to our rebuttal, and for asking for the clarifications.
For the final publication, we will publish the full appendix along with the paper, and also include it with the source code.
The rebuttal PDF can be found in the general rebuttal titled “Author Rebuttal by Authors” at the top of this page. It is a single PDF page with 2 figures, which we will include in the final paper / appendix.
For the developed gate-level optimizations, we will include the details in the final appendix. | Summary: This paper introduces a convolutional logic gate network (LGN), which works effectively on high-dimensional spatial images. Inspired by LGNs and convolutional neural nets (CNNs), the authors propose (1) logic gate trees as convolutional kernels, (2) logical OR as the pooling layer, and (3) residual initialization (instead of Gaussian random init). Besides, the authors also developed an engineering strategy to speed up training, using low-level CUDA kernels, which is commendable. The authors have shown impressive results, in terms of performance and efficiency, on the CIFAR-10 and MNIST datasets.
Strengths: - Several novel ideas (technical contributions) exist in this paper. I especially admire the idea of primarily using a feedforward logic gate during initialization, which prevents both loss of information and vanishing gradients. The motivation and intuition are very clear in L208-215.
- The authors have demonstrated their design using insightful experiments. For example, when introducing logic OR as pooling, the authors discussed that training can implicitly prevent saturation of activations using experiments, which is very interesting.
- The experimental performance is impressive.
- The engineering strategy and CUDA implementation (and open-source) would benefit the community and future research a lot.
- The paper is very well-written. Though I'm not an expert in this domain, it is easy to understand the storyline, technical details, related works, and intuition.
Weaknesses: - Some suggestions on Figure presentation.
- Figure 1. Also consider showing the speed advantage, as today's high-performance edge devices (Nvidia Xavier, Orin, etc.) can accommodate large-weight networks, and the weights of other works are already relatively small. Reporting that you can run inference on a CIFAR-10 image in ~0.7 $\mu s$ on an FPGA chip would be very impressive even without looking into your paper. Besides, what is the "Pareto-curve" (in the caption)? I didn't see it shown in the figure.
- Figure 2. Maybe change the input into some "flattened inputs" (L112) to better show that vanilla LGNs are not designed to process images.
- Figure 3. Maybe re-arrange the figure and get some space for the details of your structure. Better to also show how your network processes "channels", as currently it is not clear from the figure. I also suggest adding some annotations, e.g., in the figure, adding notations like "depth d=2" and marking that the green squares are the selected Cm, Ch, Ck; in the caption, also mention that your NN can share weights like a CNN across different spatial regions. Polishing this figure can help the reader understand the processing details quicker than understanding Eqn. (3).
- Some technical questions need to be better explained.
- L114-115, why are vanilla LGNs "very computationally expensive to train"? Is it because they didn't implement CUDA kernels?
- I noticed that in the design of the network, the authors chose a relatively small depth but large channels (2 vs 40,400,2000+). Is there any intuitive reason to do so? How many layers (depth) does vanilla LGN have?
- The authors have implemented CUDA kernel, but in the speed comparison (Table 2), the results are from Xilinx FPGA (I guess only has CPU). Why didn't the authors implement experiments on GPU? Is it for fair comparison w/ others? Maybe I missed something, but on CPU, what's the advantage of implementing CUDA kernel?
Technical Quality: 4
Clarity: 4
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors could provide a paragraph discussing their potential limitation to solving more complex CV tasks involving continuous decisions. E.g., regressing boundaries of bounding boxes (Object Detection/Tracking), localization and mapping (SLAM), generative CV, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your positive feedback, and for appreciating the feedforward gates during initialization, our ablation studies, our experimental performance, as well as our engineering and CUDA implementations.
We appreciate that you find our "paper is very well-written".
In the following, we address each of your recommendations, questions, and concerns.
> Some suggestions on Figure presentation. Figure 1. [...] Figure 2. [...] Figure 3. [...]
Thank you for each of these helpful suggestions. We will incorporate them into Figures 1, 2, and 3, as well as their captions.
> Some technical questions need to be better explained.
> L114-115, why are vanilla LGNs "very computationally expensive to train"? [...]
The primary reason for vanilla LGNs to be very computationally expensive to train is that they are bottlenecked by memory and cache read and write accesses during training.
In contrast, by utilizing the proposed convolutional structure, parameter sharing enables loading fewer parameters into a cache that is shared between different cores of the GPU, requiring only a single read.
Moreover, by fusing the tree structure, and not storing any intermediate results in memory, expensive global memory writes are drastically minimized. If $d=2$ and we have a maxpool of $2\times 2$ fused to it, then 15 logic gates are executed during forward, and only a single output activation has to be stored. While this requires recomputation of intermediate results during backward, only one out of four paths through the pooling needs to be backpropagated through, and the choice of this path requires storing only 2 bits. This, combined, leads to a much higher utilization of the memory bandwidth, while at the same time reducing memory access requirements, and drastically improving the utilization of actual compute units in the GPU.
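As a back-of-the-envelope check of the numbers above (illustrative only; the actual CUDA kernels are of course far more involved), the 15 forward gates and the 2 bits for the chosen pooling path follow from simple counting:

```python
import math

def fused_block_gate_count(tree_depth: int = 2, pool_size: int = 4) -> int:
    # A binary tree of 2-input gates with `tree_depth` levels has
    # 2**tree_depth - 1 gates (d=2 -> 3 gates per tree).
    tree_gates = 2 ** tree_depth - 1
    # One fused output pools `pool_size` tree outputs through a pooling
    # tree of pool_size - 1 further 2-input gates: 4*3 + 3 = 15.
    return pool_size * tree_gates + (pool_size - 1)

def pool_path_bits(pool_size: int = 4) -> int:
    # The backward pass follows only one of `pool_size` paths through the
    # pooling, so the chosen path fits in log2(pool_size) bits.
    return int(math.log2(pool_size))
```

With the defaults (d=2 trees, 2x2 pooling), `fused_block_gate_count()` gives 15 and `pool_path_bits()` gives 2, matching the rebuttal text.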
Furthermore, the sparsity pattern as introduced by using convolutions is also more favorable for memory accesses.
Beyond this, we have made contributions to faster training of vanilla LGNs, and, e.g., reduced memory access from reading 18 floats down to reading 6 floats by precomputing coefficients in a simplification of Eq. (1). In particular, Eq. (1) can be rewritten as $u_0 \cdot a_1 + u_1 \cdot a_2 + u_2 \cdot a_1 \cdot a_2 + u_3 \cdot 1$ for a certain set of $u_0, u_1, u_2, u_3$, which we precompute in a separate kernel, and thus only have to compute once for the entire batch. This constitutes another fundamental speedup that applies to both vanilla LGNs and convolutional LGNs, reducing both memory and compute requirements.
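A minimal sketch of this coefficient precomputation follows. The 16 real-valued gate relaxations and their ordering are assumed here (they may differ from the paper's exact set); the key point is that each relaxed gate is a polynomial $c_a a_1 + c_b a_2 + c_{ab} a_1 a_2 + c_1$, so the probability-weighted mixture collapses into four coefficients $u_0, u_1, u_2, u_3$ that can be precomputed once per gate per batch:

```python
import numpy as np

# Coefficients of 16 relaxed two-input gates, each written as
# c_a*a + c_b*b + c_ab*a*b + c_1, with columns [a, b, a*b, 1].
GATES = np.array([
    [ 0,  0,  0, 0],  # False
    [ 0,  0,  1, 0],  # a AND b
    [ 1,  0, -1, 0],  # a AND NOT b
    [ 1,  0,  0, 0],  # a
    [ 0,  1, -1, 0],  # NOT a AND b
    [ 0,  1,  0, 0],  # b
    [ 1,  1, -2, 0],  # a XOR b
    [ 1,  1, -1, 0],  # a OR b
    [-1, -1,  1, 1],  # NOR
    [-1, -1,  2, 1],  # XNOR
    [ 0, -1,  0, 1],  # NOT b
    [ 0, -1,  1, 1],  # a OR NOT b
    [-1,  0,  0, 1],  # NOT a
    [-1,  0,  1, 1],  # NOT a OR b
    [ 0,  0, -1, 1],  # NAND
    [ 0,  0,  0, 1],  # True
], dtype=np.float64)

def gate_mixture_naive(p, a, b):
    # Weighted sum over all 16 relaxed gates; reads 16 probabilities
    # for every activation pair.
    basis = np.array([a, b, a * b, 1.0])
    return float(p @ (GATES @ basis))

def precompute_u(p):
    # Collapse the 16 probabilities into u_0..u_3 once per gate per batch.
    return p @ GATES

def gate_mixture_fast(u, a, b):
    # u_0*a_1 + u_1*a_2 + u_2*a_1*a_2 + u_3, as in the rewriting above.
    return float(u @ np.array([a, b, a * b, 1.0]))
```

By associativity, `gate_mixture_fast` with precomputed `u` gives the same result as the naive 16-term sum while reading only 4 coefficients per gate.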
> I noticed that in the design of the network, the authors chose a relatively small depth but large channels (2 vs 40,400,2000+). Is there any intuitive reason to do so? How many layers (depth) does vanilla LGN have?
The intuitive reason for the large number of channels compared to the depth is that the network is very sparse and only uses logic, and thus the expressivity of each channel is smaller (compared to a conventional CNN). Thus the model requires more channels in order to attain high overall expressivity.
The depth of 2 that you mention refers to the depth of each convolutional block. With 4 convolutional blocks and 2 randomly connected layers for the head, the total trainable depth of our model is 10 layers. Including the or-pooling, we have a total of 18 layers.
The best performance with vanilla LGNs is achieved with 4-6 layers. [7] reported trying up to 8 layers, and from our own experiments we can confirm that vanilla LGNs with 8 or more layers converge extremely slowly and to lower accuracies. The best vanilla LGN MNIST model uses 6 layers and the best vanilla LGN CIFAR-10 model uses 5 layers. The best vanilla LGN for CIFAR-10 requires 1,024,000 neurons per layer.
Thus, our CIFAR-10 model has 2x the trainable depth and 3.6x the total depth compared to the vanilla LGN, while having substantially fewer channels.
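The depth bookkeeping above can be verified with simple arithmetic (a sketch; the layer counts are taken from the rebuttal text):

```python
def total_depths(conv_blocks: int = 4, layers_per_block: int = 2,
                 head_layers: int = 2, pool_layers_per_block: int = 2):
    # Trainable depth: logic layers inside the conv blocks plus the
    # randomly connected head layers: 4*2 + 2 = 10.
    trainable = conv_blocks * layers_per_block + head_layers
    # Each conv block is followed by 2x2 or-pooling, adding two
    # (non-trainable) levels of 2-input logic per block: 10 + 4*2 = 18.
    total = trainable + conv_blocks * pool_layers_per_block
    return trainable, total
```

With the stated configuration, this yields a trainable depth of 10 and a total depth of 18, matching the numbers above.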
> The authors have implemented CUDA kernel, but in the speed comparison (Table 2), the results are from Xilinx FPGA (I guess only has CPU). Why didn't the authors implement experiments on GPU? Is it for fair comparison w/ others? Maybe I missed something, but on CPU, what's the advantage of implementing CUDA kernel?
To clarify, while all training is performed on GPU (as it requires float operations), the inference is performed on FPGAs or CPUs as it only requires bitwise logical operations.
While we could have also run inference on GPU, GPUs are highly optimized for float operations and rather neglect bitwise logic; further, for GPUs, the speed of transferring input data would be the bottleneck.
As FPGAs are effectively slow proxies of ASICs, utilized in hardware design, they are the closest one can get to ASICs without actually manufacturing ASICs.
> The authors could provide a paragraph discussing their potential limitation to solving more complex CV tasks involving continuous decisions. E.g., regressing boundaries of bounding boxes (Object Detection/Tracking), localization and mapping (SLAM), generative CV, etc.
Indeed, we have not explored CV tasks involving continuous decisions.
Continuous decisions, such as the ones listed, are a great direction for future work, and we will include a paragraph in the camera-ready to highlight this.
Rebuttal: We would like to thank all reviewers for their time and valuable comments, which have helped us improve our paper.
We respond to each of your questions and concerns individually below.
Moreover, we would like to highlight the following additions:
* We are now providing standard deviations for our smaller models, and have started training additional seeds for our larger model, which we will include in the camera-ready.
* Inspired by Reviewer zmod's comments, we have trained a larger and deeper model on MNIST, achieving 99.24±0.06% that requires only 1.82 M gates, achieving the best accuracy for logic gate and binary networks overall.
* In the author response PDF, we provide an illustration of the learned distributions over logic gates, comparing residual and Gaussian initializations.
* Finally, we added an ablation study wrt. $z_3$.
Pdf: /pdf/e8daf9107c7ed630e8be4e076f854f16184e1759.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On Learning Multi-Modal Forgery Representation for Diffusion Generated Video Detection | Accept (poster) | Summary: To detect video forgeries, the authors propose an innovative Multi-Modal Forgery Representation (MMFR) to discriminate fake videos from real ones. Besides, the authors establish a high-quality dataset including videos generated by various diffusion-based algorithms to evaluate the effectiveness of the proposed detector.
Strengths: (1)The authors propose a video-level detection algorithm, named MM-Det, to capture forgery traces based on an LMM-based representation for the generalizable fake video detection. MM-Det utilizes perception and reasoning capabilities from LMMs through multi-turn conversations to learn a generalizable forgery feature.
(2)The authors extend the reconstruction representation for detecting diffusion images into the video domain to amplify diffusion artifacts in both spatial and temporal information.
(3)The authors are the first to establish a comprehensive dataset for diffusion-generated videos, named DVF. The proposed detector achieves promising detection performance on a wide spectrum of forgery videos.
Weaknesses: (1) I didn't understand how the authors use VQVAE to amplify diffusion features, and it seems that this reconstruction branch is not shown in Figure 3.
(2) There have been some works [1,2] that incorporate temporal token in self-attention, is this different from IAFA? What are the advantages of the proposed method?
[1] Wang J, Yang X, Li H, et al. Efficient video transformers with spatial-temporal token selection[C]//European Conference on Computer Vision. Cham. Springer Nature Switzerland, 2022: 69-86.
[2] Liu Y, Xiong P, Xu L, et al. Ts2-net: Token shift and selection transformer for text-video retrieval[C]//European conference on computer vision. Cham: Springer Nature Switzerland, 2022: 319-335.
(3) As shown in Section 4.2, the authors collected a training set and a test set to evaluate the performance of different methods. What is the relationship between these data and DVF? The article does not seem to describe the benefits of the DVF, e.g., how much performance improvement does the DVF bring to the authors' method, and how much performance improvement does the DVF bring to other methods?
(4) Forgery video detection is a challenging task; however, the experimental content is too limited and simple: only 2 tables of experiments are insufficient to fully verify the effectiveness of the proposed method. The authors need to conduct some other types of experiments to further validate the performance of the method.
Technical Quality: 2
Clarity: 3
Questions for Authors: From the visualization results, the colors of the fake image are more colorful, is the author's method still valid if the brightness is adjusted to make them similar to the real image?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful and helpful feedback. We appreciate all constructive comments on the novelty, clarity, and valuable contribution of the paper. We demonstrate additional experiments and answer all weaknesses and questions mentioned in the review.
> **Q1** The usage of VQVAE
As for diffusion detection, a series of previous works[1,2] found that the reconstruction process of autoencoders extracts discriminative features for diffusion. Following [2], our work introduces a pretrained vector quantized variational autoencoder (VQVAE) for an augmentation to diffusion forgery features. We display the contribution of VQVAE reconstruction in Figure B of our PDF material.
For brevity and legibility, we do not visualize an explicit reconstruction process in Figure 3 of our paper, as it is not the main contribution of our method. We have provided a detailed explanation of this process in Section 3.2, Lines 147-158 of our paper.
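A minimal sketch of the reconstruction-based cue described above (using a toy quantizing "autoencoder" as a stand-in for the pretrained VQVAE; the actual model and residual definition in the paper may differ):

```python
import numpy as np

def toy_vq_autoencoder(frame: np.ndarray, levels: int = 8) -> np.ndarray:
    # Stand-in for a pretrained VQ-VAE: quantize each value (assumed in
    # [0, 1]) to a codebook of `levels` uniform levels and return the
    # "reconstruction".
    return np.round(frame * (levels - 1)) / (levels - 1)

def reconstruction_residual(frame: np.ndarray, autoencoder) -> np.ndarray:
    # Per-pixel residual between a frame and its autoencoder
    # reconstruction; works such as DIRE and AEROBLADE use this kind of
    # residual as an amplified cue for diffusion-generated content.
    return np.abs(frame - autoencoder(frame))
```

The intuition is that diffusion-generated frames tend to be reconstructed more faithfully than real ones, so the residual map carries a discriminative signal.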
> **Q2** Analysis of IAFA
We provide analysis and experiments in Q1 of the overall response.
> **Q3**: The relationship between experiment datasets and our proposed DVF
Both training and testing datasets are subsets of DVF. They do not overlap with each other and form the entire DVF together.
Specifically, the dataset DVF includes real videos from InternVid-10M and fake videos from 8 diffusion methods. In experiments, we choose one diffusion method, Stable Video Diffusion, as the training set, and evaluate all baselines on the other 7 diffusion methods.
> **Q4**: Contribution of DVF
We provide additional experiments to prove the contribution of DVF to performance in Table I. For our method, we emphasize the contribution of DVF in finetuning LMMs. For other methods, DVF provides diffusion traces for forgery detection. We prove the effectiveness of DVF by comparing pretrained baselines with ones fine-tuned on DVF, for which we choose UniversalFD[5] and F3Net[6], and by comparing our method between pretrained and fine-tuned LMMs.
**Table I**: Performance of pretrained and fine-tuned detectors on DVF in terms of AUC
|Model|Videocrafter|Zeroscope|Sora|Pika|Opensora|Stable Diffusion|Stable Video|Average|
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|UniversalFD(Pretrained)|$59.2$|$69.4$|$65.6$|$65.2$|$61.7$|$81.2$|$59.2$|$65.9$|
|UniversalFD(Finetuned)|$93.6$|$90.1$|$85.4$|$93.0$|$83.9$|$81.5$|$87.9$|$87.9$|
|F3Net(Pretrained)|$60.1$|$65.2$|$60.7$|$63.6$|$59.2$|$65.8$|$61.3$|$62.3$|
|F3Net(Finetuned)|$96.1$|$91.8$|$66.0$|$95.6$|$85.9$|$86.3$|$96.0$|$88.2$|
|Ours(w/Pretrained LMM)|$95.3$|$95.2$|$89.6$|$94.3$|$90.2$|$86.6$|$96.7$|$92.6$|
|Ours(w/Finetuned LMM)|$99.2$|$98.4$|$97.4$|$95.5$|$99.4$|$98.0$|$98.4$|$98.0$|
As is shown, DVF benefits other baselines by $+22.0\\%$ and $+25.9\\%$ in AUC. For our method, the LMM finetuned on DVF also outperforms the pretrained one by $+5.4\\%$ in AUC.
> **Q5**: More experiments on validation of our method
We provide further analysis and experiments to validate our method. In the overall response, we provide
1) the functionality of IAFA compared with other baselines in Q1,
2) the generalization of our method on GAN and latest diffusion methods in Q2,
3) the performance of our method on different video durations and resolutions in Q2,
4) the ablation study on different LLMs in Q3.
In our PDF material, we provide
1) Figure A: attention heatmaps of our proposed IAFA compared with other spatiotemporal networks.
> **Q6**: From the visualization results, the colors of the fake image are more colorful, is the author's method still valid if the brightness is adjusted to make them similar to the real image?
Our method is effective after adjusting the brightness. We provide more visualization results of our LMMs on both real and fake content with similar brightness in Figure C of our PDF material.
**References**
[1] Wang Z, Bao J, Zhou W, et al. Dire for diffusion-generated image detection
[2] Ricker J, Lukovnikov D, Fischer A. AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error
[3] Wang J, Yang X, Li H, et al. Efficient video transformers with spatial-temporal token selection
[4] Liu Y, Xiong P, Xu L, et al. Ts2-net: Token shift and selection transformer for text-video retrieval
[5] Ojha U, Li Y, Lee Y J. Towards universal fake image detectors that generalize across generative models
[6] Qian Y, Yin G, Sheng L, et al. Thinking in frequency: Face forgery detection by mining frequency-aware clues
---
Rebuttal Comment 1.1:
Comment: The author did additional experiments to solve my concerns, so I have increased my score to 6 (weak accept).
---
Rebuttal 2:
Title: Thank you, Reviewer Niir
Comment: Dear Reviewer Niir,
We are glad that our additional experiments solved your concerns. Thanks again for your valuable review.
Best regards,
Authors of Submission 2768 | Summary: This work focuses on diffusion model detection, a core and popular research topic recently. It identifies limitations in previous studies that concentrate on fake face and image-level detection and explores the idea of using recent LMM to detect forgery. The proposed Multi-modal Forgery Representation (MMFR) leverages existing LMM and introduces a new in-and-across frame attention mechanism. Additionally, a new dataset is proposed, and empirical results demonstrate the effectiveness of the proposed methods.
Strengths: 1. The motivation behind this work is clear, and the idea of using LMMs for image forensics is novel. This idea potentially pioneers a new research direction.
2. Constructing a large-scale diffusion video dataset for the community is a reasonable endeavor and can benefit the entire research community.
3. I personally appreciate the analysis in section 4.4, which explains how the authors designed the MMFR and utilized outputs from specific layers of LLama.
4. The main result suggests the proposed method achieves SoTA video-level performance and can be used as the baseline method for the future works.
Weaknesses: 1. To me, the contribution of in-and-across frame attention seems limited. Providing a comparison to other video-level ViTs and more details about its differences from other works regarding ViT would make this contribution more convincing.
2. The DE-FAKE [R1] method also adopts multi-modal representations, but this work is neither discussed nor compared in the paper.
[R1: DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models]
3. The proposed dataset is a significant contribution of the paper. Therefore, instead of individual frames, it would be more convincing to include sample video frames either in the supplementary material or on the project page.
4. The main performance table needs more details and a clarification on why it contains both frame and video comparisons.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the weakness section.
1. The overall work is novel and demonstrates strong empirical performance, but I have minor concerns regarding its contributions, specifically the in-and-across attention mechanism and the dataset.
2. Additionally, more related work should be discussed and compared to provide a comprehensive context.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes, the authors adequately discussed the limitations and societal impact of their work in Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all insightful comments and suggestions. We appreciate the appraisal of the contribution, soundness, and clarity of the paper. We provide additional experiments and answer the weaknesses and questions in the review.
> **Q1**: Comparison with other spatiotemporal networks
We provide a comparison of IAFA with other ViT baselines and analyze our advantages in Q1 of the overall response.
> **Q2**: Discussion and comparison with other multimodal baselines
In previous studies on forgery detection, DEFAKE[1] adopts multimodal representations by using semantic descriptive prompts as an augmentation of visual features. However, semantic descriptions do not have strong correlations with the authenticity of images, which makes them ineffective in forgery tasks. As a comparison, our method utilizes the powerful reasoning ability of finetuned Large Multimodal Models (LMMs) to provide more accurate features in the text space. Therefore, our method is more effective in forgery detection.
We provide a comparison of DEFAKE[1] with our method in Table H. Each model is trained on the same training dataset SVD, and evaluated on other diffusion datasets in our proposed dataset DVF.
**Table H**: Comparison between DEFAKE and our method in AUC
|Model|Videocrafter|Zeroscope|Sora|Pika|Opensora|Stable Diffusion|Stable Video|Average|
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|DEFAKE(SIGSAC2023)|$72.3$|$70.3$|$67.3$|$88.4$|$53.6$|$86.0$|$74.1$|$73.1$|
|Ours|$99.2$|$98.4$|$97.4$|$95.5$|$99.4$|$98.0$|$98.4$|$98.0$|
As is demonstrated, our proposed multimodal representation is effective in most diffusion forgery content.
> **Q3**: Visualization of DVF
We will release sample video frames from DVF instead of individual frames for better visualization.
> **Q4**: A clarification of the main table
To justify the effectiveness of our method both on the image-level and video-level detection, the main performance table contains both frame and video comparisons. For frame-level detection, frames from each video are treated individually for training and testing to demonstrate the generalization ability of our proposed Multi-Modal Forgery Representation(MMFR) without spatio-temporal information. For video-level detection, continuous video clips are forwarded during training and evaluation. In this case, both multimodal representation and spatiotemporal information take effect. We use these experiment settings to validate the effectiveness of MMFR and IAFA.
> **Q5**: Discussion and comparison with more works
We provide additional analysis and experiments regarding more related works.
In the overall response, we provide
1) the functionality of IAFA compared with other baselines in Q1,
2) the generalization of our method on GAN and latest diffusion methods in Q2,
3) the performance of our method on different video durations and resolutions in Q2,
4) the ablation study on different LLMs for our method in Q3.
In the PDF material, we provide
1) Figure A: Visualization on attention heatmaps from IAFA.
**References**
[1] Sha Z, Li Z, Yu N, et al. De-fake: Detection and attribution of fake images generated by text-to-image generation models
[2] Arnab A, Dehghani M, Heigold G, et al. Vivit: A video vision transformer
---
Rebuttal Comment 1.1:
Comment: Thank you for the clear rebuttal. My concerns have been addressed. I have increased my score to 7 (accept). It would be good to include some of clarifications provided above into the revised version.
---
Rebuttal 2:
Title: Thank you, Reviewer LBha
Comment: Dear Reviewer LBha,
We are glad that our rebuttal has successfully addressed your concerns. We will include the clarifications into the revised manuscript. Thank you again for your valuable comments.
Best regards,
Authors of Submission 2768 | Summary: This manuscript presents a new approach to detect fake videos generated with diffusion models. This approach is based on multimodality analysis and reports promising results on a database introduced by the author(s). These results are based on both frame and video levels. This work is well written and organized.
Strengths: This proposed approach appears novel and demonstrates its effectiveness with a good ablation study. As the variety of generators increases, the method can protect users from fraudulent generated videos in the real world. Users can verify videos with the help of the method.
Weaknesses: A clear justification for the complex approach is not provided, and readers expect more details.
I think that the method is limited in a global application to the recognition of videos generated with diffusion models and that we should use other methods for other types.
Technical Quality: 3
Clarity: 3
Questions for Authors: How can you add the method to a general method to detect all or most forgery video types?
What do you think will happen if you have created a video using other methods such as GAN methods?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: These limitations are addressed by the author(s).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our sincere gratitude to the reviewer for the insightful and valuable feedback. We appreciate all constructive comments on the novelty, effectiveness, and clarity of the paper. We provide additional experiments and answer the weaknesses and questions mentioned in the review as follows.
> **Q1**: A clear justification on MM-Det
In our paper, we propose Multi-Modal Detection (MM-Det) for forgery detection. Briefly, the method integrates Multi-Modal Forgery Representations (MMFR) in the LMM branch with highly effective spatiotemporal features in the ST branch. In addition, a dynamic fusion strategy is applied to aggregate these features and adjust feature weights. Our proposed In-and-Across Frame Attention (IAFA) helps the method effectively capture forgery features. Besides, our method maintains strong generalization ability on unseen forgery types with the help of large multimodal models.
We provide detailed analyses and experiments for the functionality and effectiveness of our proposed MMFR and IAFA in the overall response. Specifically, we provide
1) the functionality of IAFA compared with other baselines in Q1,
2) the generalization of our method on GAN-based and the latest diffusion methods in Q2,
3) the performance of our method on different video durations and resolutions in Q2,
4) the ablation study on different LLMs for our method in Q3.
> **Q2**: Generalization ability
We provide additional performance results on GAN-based and the latest diffusion methods in Q2 of the overall response to demonstrate the generalization ability of our method to other types of forgery.
> **Q3**: Improvements on more forgery content
Additional performance results on GAN-based and the latest diffusion methods in Q2 of the overall response demonstrate the effectiveness of our method, showing that our fine-tuned large multimodal model specializes in forgery detection and generalizes to most forgery types. For adaptation to more forgery content, enlarging the number and variety of datasets and fine-tuning large multimodal models would lead to more promising performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I keep the rating.
---
Rebuttal 2:
Title: Thank you, Reviewer H1bu
Comment: Dear Reviewer H1bu,
Thank you for your recognition of our work. We greatly appreciate your valuable comments.
Best regards,
Authors of Submission 2768 | Summary: This paper proposes a method for detecting diffusion-generated fake videos using a multi-modal approach. Key contributions include:
A Multi-Modal Forgery Representation leveraging vision and language capabilities of large multimodal models
An In-and-Across frame attention to capture spatial-temporal forgery traces
A fusion strategy to combine multi-modal features
A new dataset of diffusion-generated videos for benchmarking
The method outperforms existing approaches on cross-dataset evaluation.
Strengths: Creation of a new dataset to address lack of benchmarks in this area
Use of LMMs for video forgery detection, leveraging their visual and language understanding Combination of frame-level and video-level features
Evaluation on multiple diffusion video generation methods
Weaknesses: The use of LMMs and complex Transformer architectures might impose high computational demands, which could limit practical deployment in resource-constrained environments.
Discussion on Failure Cases: The paper could benefit from a detailed discussion on scenarios where the proposed method fails or underperforms, which could guide future improvements and research.
Limited discussion of computational requirements and inference speed
Limited exploration of how the method generalizes to non-diffusion generated videos
It is unclear how the system performs under different operational conditions or with lower-quality videos.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the computational cost and inference speed compare to existing methods?
How well does the method generalize to detecting non-diffusion generated fake videos?
Could the authors discuss the transferability of the proposed MMFR to other forms of synthetic media detection, such as audio or text?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Potential biases in the dataset
Scalability to longer videos or different resolutions
Adaptability to rapidly evolving diffusion models
Potential for overfitting to current generation artifacts
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable and insightful feedback. We appreciate all constructive comments on the novelty, soundness, and clarity of the paper. Here, we demonstrate additional experiments and answer the weaknesses and questions mentioned in the review.
> **Q1**: Computational analysis
As for computational requirements, our method is implemented on a single 4090 GPU with 24 GB of memory for both training and inference. We provide further computational analysis and inference speed for our method and other baselines in Table F.
**Table F**: Comparison of computational cost and inference speed
|Model|GFLOPs|Params|FPS|
|--|--|--|--|
|ViViT[1]|$84.1$|$26.4M$|$1201$|
|TALL[2]|$30.4$|$86.6M$|$1445$|
|UniversalFD[3]|$1556.4$|$304.5M$|$93$|
|MM-Det(Ours)|$9345.2$|$6919M$|$40$|
Compared with other baselines, our method requires 6x the GFLOPs and 23x the parameters of UniversalFD [3], which also uses a CLIP encoder for detection. As for inference speed, we conduct video-level inference at 40 fps. Regarding computational cost and efficiency, we argue that the bottleneck of our method lies in the integration of the large multimodal model, which accounts for 94% of the FLOPs in inference and 98% of the parameters in total. With improvements in LMM techniques and the emergence of more computation-friendly LMMs, this limitation will be alleviated. In addition, LMM inference on cloud services is available for practical deployment. Therefore, we believe the computational cost of our method will be relieved to a large extent in the future. Further discussion is beyond the scope of our paper.
> **Q2**: Generalization ability
We report the performance of our method on multiple GAN-based methods in Q2 of the overall response, where we provide extensive experiments and results on
1) Generalization ability on non-diffusion videos,
2) Generalization ability on evolving diffusion videos,
3) Scalability to multiple durations and resolutions.
With all promising performance, we prove that our method is effective in most cases.
> **Q3**: Robustness analysis
We provide additional experiments for robustness analysis in Table G.
**Table G**: Performance of MM-Det on multiple post-processing in AUC
|N/A|Blur $\\sigma=3$|JPEG $Q=50$|Resize $0.7$|Rotate $90$|Mixed|
|:--:|:--:|:--:|:--:|:--:|:--:|
|$95.5$|$89.2$|$93.2$|$91.7$|$92.1$|$91.9$|
Our method remains effective under all such post-processing conditions.
> **Q4**: Transferability to other modalities
Our proposed MMFR takes advantage of the feature spaces from the multimodal encoder and text decoder in LMMs to form an effective representation for forgery detection, which is transferable to other media. As long as the LMM is adapted to downstream tasks in audio or text, features from the encoder and decoder provide a strong representation of unseen synthetic content. Therefore, our method is generalizable to other modalities.
> **Q5**: Discussion on failure cases
While our paper makes significant strides in addressing diffusion video detection, our exploration does not extensively cover partially manipulated content. For diffusion manipulations that only happen in small areas, our method may fail to capture informative features. A possible reason is that small forgery traces disappear after multiple downsampling operations in the deep network of LMMs. Future research endeavors may benefit from investigating the limitation to tackle the challenge of minor forgery detection.
**References**
[1] Arnab A, Dehghani M, Heigold G, et al. Vivit: A video vision transformer
[2] Xu Y, Liang J, Jia G, et al. Tall: Thumbnail layout for deepfake video detection
[3] Ojha U, Li Y, Lee Y J. Towards universal fake image detectors that generalize across generative models | Rebuttal 1:
Rebuttal: We appreciate all reviewers for their valuable comments and suggestions. We are delighted to see that (a) all reviewers give positive feedback, (b) all reviewers recognize our proposed MMFR's novelty, (c) our MM-Det achieves promising and generalizable performance on diffusion forgery detection (f8wk, H1bu, LBha, Niir), (d) a comprehensive dataset is proposed for future work (X8oW, f8wk, LBha, Niir), and (e) the insightful ablation study has been appreciated (H1bu). Moreover, we received concerns about (a) illustration of the functionality of IAFA (X8oW, LBha, Niir), (b) MM-Det's generalization ability to more forgeries (f8wk, H1bu, LBha, Niir), and (c) additional justification of our MM-Det (X8oW, H1bu, LBha, Niir). Therefore, we provide precise answers to all questions, which mainly include (1) additional analysis and experiments on IAFA, (2) experiments on the generalization ability of MM-Det, and (3) an ablation study on LLMs.
Additionally, please check our PDF attachment for more figures.
> **Q1**. Analysis and experiments of In-and-Across-Frame Attention
Spatio-temporal networks [1-3] have been discussed in the field of video-level tasks, such as video understanding and retrieval. These works utilize spatial and temporal attention to capture video-level information and represent the global attributes of videos. However, it is worth noting that AI-generated videos often contain inconsistent frames, giving rise to forgery artifacts that occur over short time spans or even single frames. This property makes it difficult for conventional video methods to capture local artifacts. To address this problem, we propose an effective attention mechanism, IAFA, which specializes in local information aggregation.
The advantage of our IAFA over other baselines lies in that IAFA preserves local information at the frame level when conducting spatiotemporal attention, thus being effective in learning forgery features. Specifically, an additional temporal token is introduced in each frame for aggregation of frame-level forgery features. During forward propagation, our designed IAFA conducts in-and-across-frame attention to model both local and global forgeries at each frame consecutively.
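The temporal-token layout described above can be illustrated with a minimal NumPy sketch (the shapes, the unparameterized attention, and all names here are illustrative assumptions for exposition, not the learned IAFA inside the Hybrid ViT):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # Plain (unparameterized) scaled dot-product self-attention
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[-1])
    return softmax(scores) @ tokens

# N frames, each with P patch tokens plus 1 temporal token (index 0)
N, P, D = 4, 8, 16
rng = np.random.default_rng(0)
frames = rng.standard_normal((N, P + 1, D))

# In-frame attention: each frame's temporal token aggregates
# frame-level (local) forgery cues from its own patches.
in_frame = np.stack([self_attention(f) for f in frames])

# Across-frame attention: only the N temporal tokens attend to each
# other, exchanging global (video-level) information between frames.
temporal = in_frame[:, 0, :]                # (N, D)
in_frame[:, 0, :] = self_attention(temporal)
```

The key property this sketch illustrates is that patch tokens never attend across frames directly, so frame-local information is preserved while the temporal tokens carry the cross-frame flow.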
To prove the effectiveness of IAFA, we compare the detection performance of IAFA with other spatiotemporal baselines in Table A. Our IAFA is based on a Hybrid ViT [4], and we select TS2-Net [1], ViViT [2], TALL [3], and Hybrid ViT [4] for comparison. Each model is trained on the same training dataset, SVD from our proposed DVF, and evaluated on the remaining diffusion datasets.
**Table A**: Comparison between IAFA and spatiotemporal baselines in AUC
|Model|Videocrafter|Zeroscope|Sora|Pika|Opensora|Stable Diffusion|Stable Video|Average|
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|Hybrid ViT|$72.3$|$70.3$|$67.3$|$88.4$|$53.6$|$86.0$|$74.1$|$73.1$|
|ViViT(CVPR2021)|$89.2$|$88.0$|$81.6$|$92.7$|$85.2$|$88.1$|$92.1$|$88.1$|
|TS2-Net(ECCV2022)|$60.7$|$72.0$|$81.0$|$80.2$|$74.3$|$60.2$|$80.2$|$72.7$|
|TALL(CVPR2023) |$76.5$|$61.8$|$62.3$|$79.9$|$69.8$|$85.9$|$64.8$|$71.6$|
|Hybrid ViT w/IAFA(Ours)|$94.4$|$94.2$|$82.0$|$95.4$|$82.0$|$92.8$|$93.9$|$90.6$|
Overall, our proposed IAFA achieves the best performance in the evaluation of unseen forgeries.
To further validate the functionality of IAFA on the preservation of local information, we provide the comparison of attention heat maps between IAFA and ViViT[2] in Figure A of our PDF material.
> **Q2**: Generalization ability
We provide extensive performance results for MM-Det on more generated videos in Table B. We choose four GAN-based methods [5-8] and one diffusion tool, Kling (released on July 8th), for comparison. It is worth noting that Kling was released after our submission.
**Table B**: Performance on GAN and diffusion videos in AUC
|StyleGAN-V[5]|StyleSV[6]|StyleSV-MTM[7]|TATS[8]|Kling| Average|
|:--:|:--:|:--:|:--:|:--:|:--:|
|$97.2$|$95.6$|$99.8$|$99.9$|$99.8$|$98.5$|
Our method achieves competitive results, demonstrating its generalization ability to unseen forgery types.
In addition, we report the performance of our method regarding video lengths and resolutions in Tables C and D.
**Table C**: Scalability on resolution in AUC
|1920x1080|1280x720|1024x576|512x512|
|:--:|:--:|:--:|:--:|
|$97.0$|$96.8$|$94.4$|$97.2$|
**Table D**: Scalability on duration in AUC
|[0, 2)s|[2, 10)s|[10, 20)s|>20s|
|:--:|:--:|:--:|:--:|
|$94.1$|$96.6$|$97.5$|$96.8$|
Our method generalizes across multiple durations and resolutions.
> **Q3**: Ablation study on LLMs
We conduct an extensive ablation study on the influence of LLMs in Table E. We choose Vicuna-7b, Vicuna-13b and Mistral-7b as the LLM backbone.
**Table E**: Ablation study on LLMs in AUC
|LLM|Videocrafter|Zeroscope|Sora|Pika|Opensora|Stable Diffusion|Stable Video|Average|
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|N/A|$94.4$|$94.2$|$82.0$|$95.4$|$82.0$|$92.8$|$93.9$|$90.6$|
|Vicuna-7b|$99.2$|$98.4$|$97.4$|$95.5$|$99.4$|$98.0$|$98.4$|$98.0$|
|Vicuna-13b|$98.4$|$98.9$|$97.4$|$95.4$|$99.5$|$97.6$|$98.8$|$98.0$|
|Mistral-7b|$98.1$|$98.5$|$95.6$|$96.6$|$99.4$|$96.3$|$98.9$|$97.6$|
Overall, our proposed method is effective with different LLMs.
**References**
[1] Liu Y, Xiong P, Xu L, et al. Ts2-net: Token shift and selection transformer for text-video retrieval
[2] Arnab A, Dehghani M, Heigold G, et al. Vivit: A video vision transformer
[3] Xu Y, Liang J, Jia G, et al. Tall: Thumbnail layout for deepfake video detection
[4] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16x16 words: Transformers for image recognition at scale
[5] Skorokhodov I, Tulyakov S, Elhoseiny M. Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2
[6] Zhang Q, Yang C, Shen Y, et al. Towards smooth video composition
[7] Yang C, Zhang Q, Xu Y, et al. Learning modulated transformation in GANs
[8] Ge S, Hayes T, Yang H, et al. Long video generation with time-agnostic vqgan and time-sensitive transformer
Pdf: /pdf/dd22a10afcd0f645cc6c42b7c5aac3defae660dd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a video-level detection algorithm, named Multi-Modal Detection (MM-Det), for video forensics. MM-Det consists of a Multi-Modal Forgery Representation (MMFR) that discriminates fake videos from real ones, In-and-Across Frame Attention (IAFA) that balances frame-level forgery traces with information flow across frames, and a Dynamic Fusion Strategy that amplifies highly correlated features and suppresses unhelpful ones, integrating perception and reasoning abilities for video forensics. In addition, this paper establishes a high-quality dataset including videos generated by various diffusion-based algorithms. Evaluation on several benchmarks confirms the effectiveness of MM-Det on general content from unseen diffusion models.
Strengths: 1. The proposed multimodal representation fusion, In-and-Across Frame Attention, and Dynamic Fusion Strategy appear valid and reasonable.
2. A new data set is proposed which is valuable for video forensics research.
Weaknesses: 1. As for LLM, it isn't easy to see which module represents LLM from the paper and Figure 3. If LLM is decoupled and divided into several modules, the article should give explanations.
2. The author declares that IAFA conducts global and local attention alternately. However, the paper lacks adequate analysis and experiments to illustrate the functionality of IAFA.
3. Besides, LLM plays an important role in MM-Det. I think conducting an ablation study on LLM is necessary to see if different LLMs influence detection performance.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Which part in Figure 3 denotes LLM?
2. Analysis and experiment to illustrate the functionality of IAFA.
3. Ablation study on LLM.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The author has fully discussed the limitations of this work and pointed out the direction of future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our gratitude to the reviewer for the insightful feedback. We appreciate the constructive positive comments on the novelty, soundness, and valuable contribution of our paper. In response, we present additional experiments and address the weaknesses and questions highlighted in the review.
> **Q1** As for LLM, it isn't easy to see which module represents LLM from the paper and Figure 3.
We provide a more concrete explanation here. In our MM-Det framework, we apply a Large Multimodal Model to capture multimodal forgery representation, whose modules and outputs are denoted in dark blue in Figure 3 of our paper. The composition of the LMM branch is a typical structure of a multimodal large language model, including a tokenizer and an embedding layer for text input, a visual encoder for image input, and a transformer decoder layer as the language backbone.
> **Q2** If LLM is decoupled and divided into several modules, the article should give explanations.
Our proposed method obtains the multimodal forgery representation (MMFR) by taking advantage of the pretrained visual encoder and language decoder backbone from an LMM. Specifically, the LMM is composed of a vision encoder $\\mathcal{E}_v$, a text encoder $\\mathcal{E}_t$, and a language decoder backbone $\\mathcal{D}_t$. The input is a video sequence $\\{\\mathbf{x}\\}^N$ of $N$ frames and an instruction prompt $\\mathbf{p}$ from our forgery templates.
We capture informative multimodal features $\\mathbf{F}_m$ by conducting text generation and extracting hidden states from the last layer of $\\mathcal{E}_v$ and $\\mathcal{D}_t$, which are feature representations in both visual and textual embedding space. This process can be expressed as:
\\begin{equation}\\mathbf{F}_v =\\mathcal{E}_v(\\mathbf{x})\\end{equation}
\\begin{equation}\\mathbf{F}_t =\\mathcal{D}_t(\\mathbf{F}_v, \\mathcal{E}_t(\\mathbf{p}))\\end{equation}
\\begin{equation}\\mathbf{F}_m=\\{\\mathbf{F}_v, \\mathbf{F}_t\\}\\end{equation}
where $\\mathbf{F}_v$ and $\\mathbf{F}_t$ denote features in the visual and textual embedding space, respectively.
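The extraction pipeline in the equations above can be sketched with a minimal NumPy mock-up (the encoder/decoder functions, dimensions, and names below are placeholder assumptions, not the authors' implementation):

```python
import numpy as np

def vision_encoder(frames):
    # E_v: map N frames of flattened pixels to visual tokens
    # (placeholder: a fixed random projection)
    rng = np.random.default_rng(0)
    w = rng.standard_normal((frames.shape[-1], 64))
    return frames @ w  # F_v, shape (N, tokens, 64)

def text_encoder(prompt_ids):
    # E_t: embed the instruction prompt tokens via a lookup table
    rng = np.random.default_rng(1)
    table = rng.standard_normal((1000, 64))
    return table[prompt_ids]

def language_decoder(f_v, p_emb):
    # D_t: stand-in for the last-layer hidden states of the decoder,
    # conditioned on visual features and the prompt (placeholder:
    # mean-pooled mix of both inputs)
    ctx = np.concatenate([f_v.reshape(-1, 64), p_emb], axis=0)
    return ctx.mean(axis=0, keepdims=True)  # F_t

# A video of N=4 frames, each with 16 patches of 128-d, and a short prompt
frames = np.ones((4, 16, 128))
prompt_ids = np.array([5, 17, 42])

f_v = vision_encoder(frames)
f_t = language_decoder(f_v, text_encoder(prompt_ids))
f_m = {"visual": f_v, "textual": f_t}  # multimodal forgery representation F_m
```

The point of the sketch is only the data flow: F_v comes from the vision encoder, F_t from the decoder's hidden states, and F_m keeps both feature spaces rather than collapsing them into one.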
> **Q3** Illustration of the functionality of IAFA
We provide further analysis and demonstrate additional experiments in Q1 of the overall response.
> **Q4** An ablation study on LLMs
We provide an extensive ablation study on the influence of LLMs in Q3 of the overall response.
---
Rebuttal Comment 1.1:
Comment: The additional experiments solved my concerns, so I have increased my score to 6 (weak accept).
---
Rebuttal 2:
Title: Thank you, Reviewer X8oW
Comment: Dear Reviewer X8oW,
We are glad that our rebuttal has addressed your concerns. Thanks again for your valuable comments.
Best regards,
Authors of Submission 2768 | null | null | null | null | null | null |
AttnDreamBooth: Towards Text-Aligned Personalized Text-to-Image Generation | Accept (poster) | Summary: This paper proposes a new method for personalized image generation, decomposing the personalization process into three training stages and introducing a cross-attention map regularization term.
Strengths: The manuscript is well-written.
The authors propose to address the intrinsic issues of two classical and up-to-date methods, Textual Inversion and DreamBooth, by aligning the new concept from the prompt while preserving the original subject identity. This approach tackles a significant problem.
Weaknesses: 1. Some parts of the manuscript are slightly verbose. For instance, the introduction section already introduces the existing problems of Textual Inversion and DreamBooth, along with their corresponding analysis and naïve solution. However, these points are reiterated in Section 4.1 and the first paragraph of Section 4.2 without adding any new information.
2. What does the term 'this approach' refer to in Line 174?
3. In general, the proposed method tends to be a bag of tricks with some customized hyper-parameters.
4. Important comparisons are lacking in the manuscript, specifically the comparison with Textual Inversion (TI) and DreamBooth (DB) individually, in addition to the comparison with the combined approach of TI+DB.
5. This method uses cross-attention map for regularization, which results in a high time cost; on a NVIDIA A100, training requires 660 steps, taking 20 minutes. In contrast, DreamBooth requires only 5 minutes to train for 1000 steps.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Do the constraints imposed between [V] and [super-category] across all layers in Eq. 2 potentially restrict the diversity? Considering that [super-category] represents the original identity information and [V] introduces diversity.
2. Why was SD2.1 chosen, and how do other models perform?
3. In line 197, it is mentioned that retaining the prior preservation loss leads to poor results. Can you provide experimental results to support this?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive comments. We respond to the reviewer's concerns as follows.
>**W1: The problems and analysis of Textual Inversion and DreamBooth introduced in the Introduction section are reiterated in Section 4.1**
Thanks for this point. The current version of Section 4.1 provides a more detailed problem analysis of existing methods than the Introduction section. We agree that minimizing repetition can enhance the clarity and conciseness of the paper. To improve conciseness, we will revise Sections 1 and 4.1 as follows: 1) remove the description of the naive solution from Section 1; 2) condense the analysis of existing methods in Section 1; 3) streamline the problem description in Section 4.1; and 4) enhance Section 4.1 with a more detailed analysis and deeper insights.
>**W2: What does the term 'this approach' refer to in Line 174?**
In Line 174, "this approach" refers to Textual Inversion, which involves optimizing the input textual embedding of the text encoder to learn the new concept. Thank you for this point, and we will revise Lines 173-174 as: "To achieve this, we choose to optimize the input textual embedding of the text encoder, as Textual Inversion does, given that the text encoder manages the contextual understanding of the prompt. However, as analyzed in Section 4.1, this approach is prone to overfitting the textual embedding, resulting in an embedding misalignment issue."
>**W3: The proposed method tends to be a combination of existing methods.**
We would like to clarify the key contributions of the paper. First, we identify a crucial insight in text-to-image personalization: the textual embedding of the new concept should be aligned with and seamlessly integrated into existing tokens, unlike existing studies that mainly focus on learning the new concept itself. Second, based on this insight, we propose a three-stage personalization process that is more than just a combination of existing techniques. The first stage is designed to learn the embedding alignment while mitigating the risk of overfitting, thus we significantly reduce the optimization steps to just 60. This results in a very coarse attention map and identity for the new concept. Subsequently, we refine the attention map by fine-tuning the cross-attention layers in the second stage, followed by fine-tuning the U-Net to capture the concept's identity in the third stage. Third, we propose an attention map regularization that utilizes the super-category token to guide the attention map learning in a self-supervised manner. Thanks for pointing this out, and we will improve the statements about the contributions in the revised paper.
>**W4: Comparisons to Textual Inversion (TI) and DreamBooth (DB) individually.**
Thanks for the suggestion. Comparisons to TI and DB are provided in Figure R5 of the attached PDF. As shown, our model achieves superior performance in both identity preservation and text alignment. We will add this comparison to the revised paper.
>**W5: The proposed attention map regularization results in a high time cost. The method takes 20 minutes for 660 steps, while DreamBooth takes only 5 minutes for 1000 steps.**
Indeed, computational overhead is a limitation of our method. We would like to clarify that the attention map regularization is not the most time-consuming aspect, adding an average of 67 seconds. Instead, the third training stage consumes the most time. To mitigate this, we developed an effective strategy to reduce the training time. Due to the limited text length, please refer to our global response for the details. Our fast version model significantly reduces the training time from 20 minutes to 6 minutes, while maintaining performance comparable to the original model. We thank FYJx for this point, and will definitely include the results of this fast version model in the revised paper!
>**Q1: Do the constraints imposed between [V] and [super-category] in Eq. 2 potentially restrict the diversity? Considering that [super-category] represents the original identity information and [V] introduces diversity.**
We assume that the "diversity" refers to the distinctive characteristics of the new concept [V], but please correct us if we are wrong. Our proposed constraint does not restrict the diversity of [V], as it does not enforce a strong constraint between [V] and [super-category]. Instead, we apply a flexible constraint that enforces similarity in the mean and variance of the attention map values. This strategy aims to encourage [V] to exhibit a level of concentration or dispersion in the attention map similar to that of [super-category].
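A mean-and-variance constraint of the kind described above can be sketched as follows (an illustrative example under assumed map shapes, not the authors' code):

```python
import numpy as np

def attn_reg_loss(attn_v, attn_super):
    """Encourage the [V] attention map to match the concentration
    (mean and variance) of the [super-category] map, without tying
    the two maps together location by location."""
    mean_diff = (attn_v.mean() - attn_super.mean()) ** 2
    var_diff = (attn_v.var() - attn_super.var()) ** 2
    return mean_diff + var_diff

rng = np.random.default_rng(0)
base = rng.random((16, 16))  # a 16x16 cross-attention map

# Same statistics but different spatial layout: loss is ~0,
# i.e., the constraint does not restrict where [V] attends.
loss_same = attn_reg_loss(base, np.flip(base))
# Very different statistics: loss is clearly positive.
loss_diff = attn_reg_loss(base, np.zeros((16, 16)))
```

This makes the flexibility concrete: a spatially rearranged map incurs no penalty, so only the overall concentration or dispersion of [V]'s attention is steered toward that of [super-category].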
>**Q2: Why was SD2.1 chosen, and how do other models perform?**
We empirically find that SD2.1 achieves better performance than SD1.5 for our method. SD2.1 is widely used in text-to-image personalization models, such as AnyDoor [1], ADI [2], and IDAdapter [3]. A visual comparison between our models with SD1.5 or SD2.1 is presented in Figure R6 of the attached PDF. As shown, the model with SD2.1 achieves superior performance in text alignment and identity preservation. Nevertheless, our method is also effective for SD1.5, and it outperforms the baseline methods. We will add this discussion to the revised paper.
>**Q3: It is mentioned that retaining the prior preservation loss leads to poor results. Can you provide experimental results to support this?**
Thanks for pointing this out. We present a visual comparison between models with or without the prior preservation loss in Figure R7 of the attached PDF. The results show that incorporating the prior preservation loss leads to degradation in identity preservation. We will add this comparison to the revised paper.
[1] AnyDoor: Zero-shot Object-level Image Customization, 2023
[2] Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation, 2023
[3] IDAdapter: Learning Mixed Features for Tuning-Free Personalization of T2I Models, 2024 | Summary: This paper proposes a method to enhance the performance of personalizing text-to-image models by appropriately combining the textual inversion approach, which learns new embeddings, and the DreamBooth approach, which fine-tunes model weights. The authors demonstrate the effectiveness of their approach through qualitative evaluation and human evaluation.
Strengths: * Research on appropriately combining textual inversion and DreamBooth methods has been needed in this field, and they propose a reasonable and clear method for this.
* The experimental results are good.
Weaknesses: * (minor) The proposed method seems quite similar to the Magicapture [1] approach in that it separates embedding learning and weight fine-tuning and conducts regularization on attention. However, this paper does not cover a comparison with Magicapture.
* (minor) The ablation study is not conducted quantitatively and relies on a single generation scenario.
[1] Hyung, Junha, Jaeyo Shin, and Jaegul Choo. "Magicapture: High-resolution multi-concept portrait customization." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 3. 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: When conducting analysis in Figure 1, how many steps was DreamBooth trained for to obtain the results? It seems that with sufficient training steps, DreamBooth might exhibit different results.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive comments. We respond to the reviewer's concerns as follows.
>**W1: (minor) The proposed method seems quite similar to the Magicapture [1] approach in that it separates embedding learning and weight fine-tuning and conducts regularization on attention. However, this paper does not cover a comparison with Magicapture.**
Thanks for the suggestion to compare our method with MagiCapture [1]. Indeed, MagiCapture also employs a multi-stage learning strategy that first optimizes the textual embedding and then jointly finetunes the textual embedding and U-Net. However, our method differs in several key aspects. First, the motivation for our first stage (i.e., optimizing the textual embedding) differs from MagiCapture. We focus on learning the embedding alignment while mitigating the risk of overfitting, thus significantly reducing the optimization steps to just 60. In contrast, MagiCapture optimizes the textual embedding for 1200 steps in the first stage. Second, the attention regularization in MagiCapture is applied with the help of a user-provided mask, while we utilize the super-category token to guide the attention map learning in a self-supervised manner. Third, our learning process is divided into three stages: learning the embedding alignment, refining the attention map, and capturing the subject identity. Additionally, as MagiCapture is designed for integrating subject and style concepts, and our method focuses solely on personalizing subject concepts, a direct comparison of performance between these two methods is not feasible. We will include this comparison with MagiCapture in the revised paper.
>**W2: (minor) The ablation study is not conducted quantitatively and relies on a single generation scenario.**
Thank you for pointing this out. The table below presents the quantitative results of our ablation study. Specifically, the model without Stage 1 achieves better text alignment but significantly poorer identity preservation compared to the full model. This is because, without sufficient training of the textual embedding, the model tends to overlook the learned concept or generate it with significant distortions. Please note that the text alignment score is calculated without considering the new concept; therefore, omitting the new concept can inadvertently boost this score. Similarly, models without Stage 2 or Stage 3 also exhibit higher text alignment scores but lower identity preservation scores, due to insufficient learning of the attention maps and the subject identity, respectively. Additionally, the model without the regularization term shows degraded text alignment. Regarding qualitative evaluation, more generated images are provided in Figure 13 of the Appendix. We will include this quantitative ablation study and more generated images in the revised paper.
| Methods | Identity Preservation | Text Alignment |
| :---------------- | :------: | :----: |
| w/o Stage 1 | 0.7031 | 0.2595 |
| w/o Stage 2 | 0.7145 | 0.2541 |
| w/o Stage 3 | 0.6821 | 0.2650 |
| w/o Reg | 0.7269 | 0.2502 |
| Full Model | 0.7257 | 0.2532 |
>**Q1: When conducting analysis in Figure 1, how many steps was DreamBooth trained for to obtain the results? It seems that with sufficient training steps, DreamBooth might exhibit different results.**
In Figure 1, we follow NeTI [2] to perform 500 steps with a batch size of 4 for training DreamBooth. The results of training DreamBooth for more steps are provided in Figure R3 of the attached PDF. Indeed, for certain examples (e.g., "Manga drawing of a [V] can"), DreamBooth can generate text-aligned images after 1,000 training steps. However, for many other examples (as shown in Figure R3), DreamBooth still tends to overlook the new concept even after 5,000 training steps. In contrast, our method successfully generates text-aligned images for these prompts. We appreciate this point and will include this discussion in the revised paper.
**References**
[1] Junha Hyung et al. "MagiCapture: High-Resolution Multi-Concept Portrait Customization". AAAI, 2024.
[2] Yuval Alaluf et al. "A Neural Space-Time Representation for Text-to-Image Personalization". SIGGRAPH Asia, 2023. | Summary: The authors propose a method to generate high-quality personalized images. First, a textual embedding is learned; then the cross-attention layers are finetuned to refine the attention map of the learned embedding; finally, the entire U-Net is trained to capture the subject identity.
Strengths: 1. The paper is well-written and easy to understand.
2. The proposed method achieves competitive results compared with SOTAs.
3. Using the super-category attention map is an interesting idea to calibrate the [V] attention map in a self-supervised manner.
Weaknesses: 1. The proposed method needs a costly test-time optimization to generate personalized images. The 3-stage finetuning requires expensive computation (20 minutes on an A100).
2. Even though the attention map regularization idea is interesting, it seems to be the only critical contribution in this paper. The authors combine techniques from off-the-shelf methods including CustomDiffusion/DB/TI, and conduct them step by step with the proposed attention map regularization.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Where does the super-category label come from? Is it a part of the annotation?
2. I am curious how the TI+DB perform if it is optimized step by step like AttnDreamBooth, e.g. first tuning the textual embedding and then the U-Net or first tuning U-Net then optimizing the embedding.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive comments. We respond to the reviewer’s concerns as follows.
>**W1: The proposed method needs a costly test-time optimization to generate personalized images. The 3-stage finetuning requires expensive computation (20 minutes on an A100).**
Indeed, computational overhead is a limitation of our method. To address this, we explored a simple yet effective strategy to reduce the training time. This involves increasing the learning rate while simultaneously decreasing both the training steps and the batch size for our third training stage, which is notably the most time-consuming phase. Specifically, the third stage of our original model performs 500 steps with a learning rate of 2e-6 and a batch size of 8. The fast version now completes training in just 200 steps with a learning rate of 1e-5 and a batch size of 4. This adjustment significantly reduces the training time from 20 minutes to 6 minutes on average. Interestingly, this fast version maintains performance comparable to our original model, likely because the first two stages provide a convenient starting point, allowing for a higher learning rate in the third stage. The qualitative evaluation is provided in Figure R4 of the attached PDF, and quantitative results are detailed in the table below. We observed that the fast version model performs very closely to the original model for short prompts (e.g., it even slightly outperforms the original model in the quantitative evaluation), but it slightly underperforms for complex prompts (e.g., the second and fourth examples in Figure R4).
| Methods | Identity Preservation | Text Alignment | Training Time |
| :---------------- | :------: | :----: | :----: |
| NeTI [1] | 0.6901 | 0.2522 | 13 minutes |
| Ours-fast | 0.7268 | 0.2536 | 6 minutes |
| Ours-original | 0.7257 | 0.2532 | 20 minutes |
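To make the hyperparameter change concrete, the reported stage-3 settings can be summarized in a short sketch (illustrative Python only; the variable names are not from the paper):

```python
# Stage-3 hyperparameters as reported in the rebuttal (illustrative dicts;
# the names "stage3_original"/"stage3_fast" are ours, not the paper's).
stage3_original = {"steps": 500, "lr": 2e-6, "batch_size": 8}  # ~20 min total
stage3_fast = {"steps": 200, "lr": 1e-5, "batch_size": 4}      # ~6 min total

# Relative optimization work in stage 3 (examples processed):
work_ratio = (stage3_original["steps"] * stage3_original["batch_size"]) / (
    stage3_fast["steps"] * stage3_fast["batch_size"]
)
# The fast version processes 5x fewer examples, offset by a 5x larger
# learning rate, which is viable because stages 1-2 give a good init.
```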
We thank hyCc for this point and will include the results of this fast version in the revised paper!
>**W2: Even though the Attention Map Regularization idea is interesting, it seems to be the only critical contribution in this paper. The authors combine techniques from off-the-shelf methods including CustomDiffusion/DB/TI, and conduct them step by step with the proposed attention map regularization.**
We appreciate your recognition of our attention map regularization's contribution. In addition to this regularization, we would like to clarify other key contributions of our paper. First, we identify a crucial insight in text-to-image personalization: the textual embedding of the new concept should be aligned with and seamlessly integrated into existing tokens, unlike existing studies that mainly focus on learning the new concept itself. Second, based on this insight, we propose a three-stage personalization process that is more than just a combination of existing techniques. The first stage is designed to learn the embedding alignment while mitigating the risk of overfitting, thus reducing the optimization steps to just 60. This approach results in a very coarse attention map and identity for the new concept. Subsequently, we refine the attention map by fine-tuning the cross-attention layers in the second stage, followed by fine-tuning the U-Net to capture the concept's identity in the third stage.
>**Q1: Where does the super-category label come from? Is it a part of the annotation?**
Yes, the dataset provides a coarse descriptor for each concept. In fact, many approaches, such as Textual Inversion, DreamBooth, and NeTI, also require a super-category label to initialize the textual embedding of the new concept or to provide prior knowledge. Regarding the use of the super-category label in our attention map regularization, this method does not necessitate a precise super-category label, as it does not enforce a strong constraint between the new concept and the super-category token. Instead, we impose a flexible constraint that enforces similarity in the mean and variance of the attention map values, as illustrated in Eq. (2) of the main paper. This strategy aims to encourage the new concept to exhibit a level of concentration or dispersion in the attention map similar to that of the super-category token.
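The flexible mean/variance constraint described above might be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' exact Eq. (2); the function name, inputs, and the unweighted sum of the two gaps are our assumptions.

```python
import numpy as np

def attn_mean_var_reg(attn_new, attn_super):
    """Penalize differences between the mean and variance of the
    new-concept token's attention map and those of the super-category
    token's map.  Illustrative sketch only: the paper's Eq. (2) may
    weight or combine these statistics differently."""
    mean_gap = (attn_new.mean() - attn_super.mean()) ** 2
    var_gap = (attn_new.var() - attn_super.var()) ** 2
    return float(mean_gap + var_gap)
```

Because only the first two moments are matched, the constraint encourages a similar level of concentration or dispersion without forcing the two attention maps to coincide spatially.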
>**Q2: I am curious how the TI+DB perform if it is optimized step by step like AttnDreamBooth, e.g. first tuning the textual embedding and then the U-Net or first tuning U-Net then optimizing the embedding.**
In Figure R5 of the attached PDF, we present the results of the two suggested settings for TI+DB: 1) first tuning the textual embedding and then the U-Net (denoted as TI -> DB), and 2) first tuning the U-Net and then the textual embedding (denoted as DB -> TI). As shown, both models fail to generate text-aligned images. While the TI -> DB setting improves performance compared to using TI or DB individually, it still suffers from overfitting issues. The DB -> TI setting performs very closely to the DB model alone. In contrast, our method successfully generates images that preserve concept identity and align with the text.
**References**
[1] Yuval Alaluf et al. "A Neural Space-Time Representation for Text-to-Image Personalization". SIGGRAPH Asia, 2023. | Summary: The submission proposes AttnDreamBooth for text-to-image personalization. It addresses the limitations of existing methods, Textual Inversion and DreamBooth, by separating the learning process into three stages: embedding alignment, attention map refinement, and subject identity capture. The method aims to improve identity preservation and text alignment in generated images.
# Key Contributions:
- Introduces an approach for text-to-image personalization.
- Addresses the limitations of existing methods by separating the learning process.
- Demonstrates improved performance in terms of identity preservation and text alignment.
- Provides a comprehensive analysis of the proposed method through qualitative and quantitative evaluations.
Strengths: -The paper identifies a key challenge in text-to-image personalization and provides a novel solution.
- The proposed method, AttnDreamBooth, demonstrates superior performance in both identity preservation and text alignment compared to Textual Inversion and Dreambooth.
- The work contributes to advancing the field of text-to-image personalization by offering a more effective approach to balancing the trade-off between identity and text alignment.
Weaknesses: - My main concern is comparison with new existing work. There have been several recent works after Textual inversion and Dreambooth that have significantly advanced this area. Some of them like SuTI (NeurIPS 2023), Instruct Imagen (CVPR 2024), HyperDreambooth (CVPR 2024) are also zero-shot which do not require any fine-tuning during evaluation.
- It is critical for this work to compare with the state-of-the-art papers in this area. I have listed a few above. However, I am sure that there could be more papers in the past two years in this area.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Comparison with multiple recent baselines is required. It would be great if the authors can provide this.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - While the proposed approach is novel, the core idea of decomposing the personalization process into multiple stages is not entirely groundbreaking. A more comprehensive exploration of the relationship between the proposed method and existing work would be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive comments. We respond to the reviewer’s concerns as follows.
>**W1: My main concern is comparison with new existing work. There have been several recent works after Textual inversion and Dreambooth. Some of them like SuTI (NeurIPS 2023), Instruct Imagen (CVPR 2024), HyperDreambooth (CVPR 2024).**
We would like to clarify some aspects regarding the baseline methods described in our paper. As indicated in Figure 6 of the main paper and Figure 8 of the appendix, our comparisons are not limited to Textual Inversion and DreamBooth. We also include evaluations against other recent methods such as OFT (NeurIPS 2023) [1] and NeTI (SIGGRAPH Asia 2023) [2]. We adopted these two methods as baselines because they are among the state-of-the-art and are open-source.
Thank you for the suggestion to compare our method with SuTI [3], Instruct-Imagen [4], and HyperDreamBooth [5]. In Figure R1 of the attached PDF, we present comparisons with SuTI and Instruct-Imagen. Due to the unavailability of open-source models for these two methods, we use the examples provided in their papers for comparison. As shown, our model achieves superior performance in text alignment compared to these methods. For instance, in the example of "[V] fancy boot with silver-tipped toes kicking a football", our model modifies only the toes to be silver, whereas SuTI modifies the entire boot. Regarding HyperDreamBooth, it focuses on personalizing human faces, whereas our approach targets general objects. Personalization of human faces is beyond the scope of our paper. We will include these comparisons in our revised paper.
>**W2 and Q1: It is critical for this work to compare with the state-of-the-art papers in this area. I have listed a few above. However, I am sure that there could be more papers in the past two years in this area.**
Thanks for the suggestion. As illustrated in the response to point W1, our current comparison includes two recent state-of-the-art methods: OFT (NeurIPS 2023) [1] and NeTI (SIGGRAPH Asia 2023) [2]. In addition to the above comparison to SuTI and Instruct-Imagen, we further include comparisons with two other open-source models, DreamMatcher (CVPR 2024) [6] and FreeCustom (CVPR 2024) [7], in Figure R2 of the attached PDF. As can be observed, our method achieves superior performance in both identity preservation and text alignment compared to these methods. We will add these comparisons to the revised paper.
>**Limitation: While the proposed approach is novel, the core idea of decomposing the personalization process into multiple stages is not entirely groundbreaking. A more comprehensive exploration of the relationship between the proposed method and existing work would be beneficial.**
The relationship between our method and existing multi-stage methods is discussed in the 'Multi-Stage Personalization' section of Related Work (i.e., Lines 95 - 108). Our method differs from existing methods in several aspects. Firstly, the motivation for our first stage (i.e., optimizing the textual embedding) differs from existing methods, where we focus on learning the embedding alignment while mitigating the risk of overfitting. Consequently, we significantly reduce the optimization steps to just 60 and lower the learning rate. Secondly, we decompose the learning process into three stages: learning the embedding alignment, refining the attention map, and capturing the subject identity. Thirdly, we utilize the super-category token to guide the attention map learning in a self-supervised manner throughout all training stages. We will include more multi-stage methods, such as MagiCapture [8], for comparison, making a more comprehensive discussion about multi-stage personalization methods.
**References**
[1] Zeju Qiu et al. "Controlling Text-to-Image Diffusion by Orthogonal Finetuning". NeurIPS, 2023.
[2] Yuval Alaluf et al. "A Neural Space-Time Representation for Text-to-Image Personalization". SIGGRAPH Asia, 2023.
[3] Wenhu Chen et al. "Subject-driven Text-to-Image Generation via Apprenticeship Learning". NeurIPS, 2023.
[4] Hexiang Hu et al. "Instruct-Imagen: Image Generation with Multi-modal Instruction". CVPR, 2024.
[5] Nataniel Ruiz et al. "HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models". CVPR, 2024.
[6] Jisu Nam et al. "DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization". CVPR, 2024.
[7] Ganggui Ding et al. "FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition". CVPR, 2024.
[8] Junha Hyung et al. "MagiCapture: High-Resolution Multi-Concept Portrait Customization". AAAI, 2024. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their constructive and thoughtful feedback. We are encouraged that the reviewers find our idea novel (ZpWY) and interesting (hyCc), and our method reasonable and clear (U1BU). We are pleased that they consider our results to be good (U1BU) and competitive compared to state-of-the-art approaches (hyCc). We appreciate their recognition of our paper as well-written and easy to understand (hyCc, FYJx). Moreover, ZpWY, U1BU, and FYJx recognize that our approach addresses a key problem in text-to-image personalization, while ZpWY confirms that our approach advances the field.
**[Response to a Common Concern]**
Below, we address a common concern regarding the training time of our model. Point-to-point responses are included as a reply to each reviewer.
**1. Concern regarding the training time of our model**
Indeed, computational overhead is a limitation of our method. To address this, we explored a simple yet effective strategy to reduce the training time. This involves increasing the learning rate while simultaneously decreasing both the training steps and the batch size for our third training stage, which is notably the most time-consuming phase. Specifically, the third stage of our original model performs 500 steps with a learning rate of 2e-6 and a batch size of 8. The fast version now completes training in just 200 steps with a learning rate of 1e-5 and a batch size of 4. This adjustment significantly reduces the training time from 20 minutes to 6 minutes on average. Interestingly, this fast version maintains performance comparable to our original model, likely because the first two stages provide a convenient starting point, allowing for a higher learning rate in the third stage. The qualitative evaluation is provided in Figure R4 of the attached PDF, and quantitative results are detailed in the table below. We observed that the fast version model performs very closely to the original model for short prompts (e.g., it even slightly outperforms the original model in the quantitative evaluation), but it slightly underperforms for complex prompts (e.g., the second and fourth examples in Figure R4).
| Methods | Identity Preservation | Text Alignment | Training Time |
| :---------------- | :------: | :----: | :----: |
| NeTI [1] | 0.6901 | 0.2522 | 13 minutes |
| Ours-fast | 0.7268 | 0.2536 | 6 minutes |
| Ours-original | 0.7257 | 0.2532 | 20 minutes |
We thank the reviewers for this point and will include the results of this fast version in the revised paper!
**[Additional Experimental Results]**
We summarize our additional experimental results below. Please refer to the attached PDF file for the figures. In the following, please note that Figure R* denotes the figure in the attached PDF.
1. **Comparison to SuTI [1] and Instruct-Imagen [2].** In Figure R1, we present a visual comparison with SuTI and Instruct-Imagen. Due to the unavailability of open-source models for these two methods, we compare with the examples provided in their papers.
2. **Comparison to DreamMatcher [3] and FreeCustom [4].** In Figure R2, we present a visual comparison with DreamMatcher and FreeCustom.
3. **Fast version of our model.** In Figure R4, we provide the results of our fast version model, which significantly reduces the training time from 20 minutes to 6 minutes.
4. **Quantitative ablation study.** The results of the quantitative ablation study are detailed in the response to reviewer U1BU's point W2.
5. **DreamBooth with more training steps.** In Figure R3, we provide the results of training DreamBooth with more training steps.
6. **Different settings of TI+DB.** In Figure R5, we present a visual comparison with two different settings for TI+DB: 1) first tuning the textual embedding and then the U-Net, and 2) first tuning the U-Net and then the textual embedding.
7. **Comparison to using TI or DB individually.** In Figure R5, we present a visual comparison to using TI or DB individually.
8. **Our model using SD1.5.** In Figure R6, we present the results of our model using SD1.5.
9. **Our model with the prior preservation loss.** In Figure R7, we present a visual comparison of our models with or without the prior preservation loss.
All the above additional experimental results will be added to the main text or appendix of the revised paper.
**References**
[1] Wenhu Chen et al. "Subject-driven Text-to-Image Generation via Apprenticeship Learning". NeurIPS, 2023.
[2] Hexiang Hu et al. "Instruct-Imagen: Image Generation with Multi-modal Instruction". CVPR, 2024.
[3] Jisu Nam et al. "DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization". CVPR, 2024.
[4] Ganggui Ding et al. "FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition". CVPR, 2024.
Pdf: /pdf/f8e16aca73aea62ba7d52dc778e22828ee3ac688.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Near-optimal Algorithm for Learning Margin Halfspaces with Massart Noise | Accept (spotlight) | Summary: The paper considers the problem of PAC learning halfspaces with margin in the presence of Massart noise. The paper provides an algorithm that well-balances sample and computational efficiency. Specifically, the dependence of the algorithm on both $\epsilon$ and $\gamma$ is near-optimal.
Strengths: 1. The studied problem is fundamental in the area of PAC learning, and the paper provides a significant progress on it.
2. The paper is well written, and the main techniques are well explained.
Weaknesses: 1. Results might be somewhat weaker than presented (see questions), and some phrasings in this context are too vague (for example "there is evidence that...").
2. The natural agnostic extension of the problem is not discussed.
Technical Quality: 3
Clarity: 3
Questions for Authors: Questions:
1. I'm not sure about the statement "...we essentially settle this question..." in line 71. As far as I understand, the optimal computational sample complexity might be $\tilde{O}(1/\epsilon^2 + 1/\epsilon \gamma^2)$ and not $\tilde{\Omega}(1/\epsilon^2 \gamma^2)$. Either way, the provided upper bound is impressive enough to recommend for acceptance.
2. Is there a specific reason not to discuss the agnostic case? Is it usually considered in Massart noise problems?
Suggestions:
Consider defining $\eta$ in the abstract.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reading our paper and the positive assessment. We respond to each point raised by the reviewer below.
>(Weakness 1): Results might be somewhat weaker than presented (see questions), and some phrasings in this context are too vague (for example "there is evidence that...").
*Response:* We thank the reviewer for this feedback. We will make our statements of prior information-computation tradeoffs precise. In the submitted version, we did not discuss these results in detail due to space limitations. Please see response below for more details.
>(Weakness 2): The natural agnostic extension of the problem is not discussed.
*Response:* For distribution-free agnostic PAC learning, the learning problem we study is known to be computationally intractable (even for weak learning). Specifically, it is NP-hard [1] for proper learning and cryptographically/SQ-hard for improper learning [2,3]. These hardness results have historically been one of the motivations for studying the Massart model.
>(Question 1): I'm not sure about the statement "...we essentially settle this question..." in line 71. As far as I understand, the optimal computational sample complexity might be $\tilde{O}(1/\epsilon^2 + 1/\epsilon \gamma^2)$ and not $\tilde{\Omega}(1/\epsilon^2 \gamma^2)$. Either way, the provided upper bound is impressive enough to recommend for acceptance.
*Response:* As the reviewer points out, the best known lower bound on the computational sample complexity of the problem is $\tilde{\Omega}(1/(\gamma\epsilon^2) + 1/(\epsilon \gamma^2))$. This lower bound is quadratic in both parameters of interest but does not quite match our upper bound. While we believe that a lower bound of order $\tilde{\Omega}(1/(\gamma^2\epsilon^2))$ exists, this remains an open problem. Finally, we note that even for the easier model of Random Classification Noise (RCN), the best known efficient algorithm has sample complexity $\tilde{O}(1/(\gamma^2\epsilon^2))$ (established recently in [DDK+23a]).
>(Question 2): Is there a specific reason not to discuss the agnostic case? Is it usually considered in Massart noise problems?
*Response:* See Response to Weakness 2 above.
> Suggestions: Consider writing what is $\eta$ in the abstract.
*Response:* Thank you. We will add this in the final version.
[1] Hardness of Learning Halfspaces with Noise, Venkatesan Guruswami, Prasad Raghavendra, FOCS 2006
[2] Complexity Theoretic Limitations on Learning Halfspaces, Amit Daniely, STOC 2016
[3] Hardness of agnostically learning halfspaces from worst-case lattice problems, Stefan Tiegel, COLT 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments and questions. | Summary: The paper considers the problem of PAC-learning $\gamma$-margin halfspaces under $\eta$-Massart noise. The paper provides an efficient algorithm achieving error $\eta+\epsilon$ with sample complexity $\tilde{O}(1/(\epsilon^2\gamma^2))$. Since previous work provided evidence that an inverse-quadratic dependence on $\epsilon$ is necessary for efficient algorithms, the algorithm appears to be near-optimal in terms of sample-complexity up to logarithmic factors.
The algorithm relies on an iterative stochastic-gradient descent approach, where at each iteration a new loss function is defined which determines the gradient step of the next iteration. The paper proves that if $T$ iterations are performed (where $T$ is a sufficiently large multiple of $\log(1/\delta)/(\epsilon^2\gamma^2)$), then with probability at least $1-\delta$, one of the halfspaces obtained in $T$ iterations has error at most $\eta+\epsilon$. By drawing some extra independent samples and computing the empirical error for each of these halfspaces, one can find a good halfspace.
The paper is generally well written.
Strengths: The paper provides a near-optimal algorithm (in terms of sample complexity) for learning $\gamma$-margin halfspaces under Massart noise.
Weaknesses: I haven't found significant weaknesses in the paper.
Typos:
- Page 2, line 34: “We say that that the distribution” -> “We say that the distribution”
- Page 5: Line 195: If I am not mistaken, the function g is twice the gradient.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The authors claim that the computational complexity of the algorithm is linear in the samples. However, it seems to me that the complexity of step (5) is $O(T\cdot N\cdot d)$. It does not seem to be trivial to implement step (5) in $O((T + N)\cdot d)$ time.
- While previous work show that $\Omega(1/(\gamma^2\epsilon))$ and $\Omega(1/\epsilon^2)$ are lower bounds, it doesn't necessarily follow that $\Omega(1/(\epsilon^2\gamma^2))$ is a lower bound as well.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Since this is a theoretical paper, I can't see any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reading our paper and the positive assessment. We respond to each point raised by the reviewer below.
>(Question 1): The authors claim that the computational complexity of the algorithm is linear in the samples. However, it seems to me that the complexity of step (5) is $O(T\cdot N\cdot d)$. It does not seem to be trivial to implement step (5) in $O((T + N)\cdot d)$ time.
*Response:* Yes, the reviewer is correct. We will fix the statement about the runtime where appropriate. As the reviewer notes, the running time incurs an extra $1/\epsilon$ multiplicative term (up to logarithmic factors). That is because $N$ is set to $\tilde{O}(\log(1/(\delta\gamma))/(\epsilon (1-2\eta)))$ and we need to evaluate each of the $T$ hypotheses to return the best one.
>(Question 2): While previous work show that $\Omega(1/(\gamma^2\epsilon))$ and $\Omega(1/\epsilon^2)$ are lower bounds, it doesn't necessarily follow that $\Omega(1/(\epsilon^2\gamma^2))$ is a lower bound as well.
*Response:* We agree with the reviewer about prior work on hardness, and we will adjust our phrasing. In more detail, the best known lower bound for the computational sample complexity of this problem is $\tilde{\Omega}( 1/(\epsilon \gamma^2)+1/(\gamma\epsilon^2) )$, and applies to SQ algorithms and low-degree polynomial tests. The first term is the information-theoretic sample complexity [MN06] and the second term follows from [DDK+23a,DDK+23b].
While we have good reason to believe that these hardness results can be improved to give a lower bound of $\Omega(1/(\epsilon^2\gamma^2))$, this remains an open problem.
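For readers collecting the bounds mentioned in this exchange, they can be summarized as follows (as stated in this discussion; the last line is conjectured and remains open):

```latex
\begin{aligned}
&\text{Information-theoretic sample complexity [MN06]:} && \tilde{\Theta}\big(1/(\gamma^{2}\epsilon)\big)\\
&\text{SQ / low-degree lower bound [DDK+23a, DDK+23b]:} && \tilde{\Omega}\big(1/(\epsilon\gamma^{2}) + 1/(\gamma\epsilon^{2})\big)\\
&\text{Efficient upper bound (this paper):} && \tilde{O}\big(1/(\gamma^{2}\epsilon^{2})\big)\\
&\text{Conjectured computational lower bound (open):} && \tilde{\Omega}\big(1/(\gamma^{2}\epsilon^{2})\big)
\end{aligned}
```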
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. Since they addressed my minor comments, I'm increasing my score to 7. | Summary: This paper studies the problem of PAC learning $\gamma$-margin halfspaces under Massart noise: to PAC learn any distribution $D$ such that there exists $w^*$ with the bounded norm of $1$ that has a margin of at least $\gamma$, i.e. $\mathbb{P}_{(x,y)\sim D}( |\langle w^*,x \rangle| \geq \gamma)=1$ and for every $x$, we have $\mathbb{P}(\text{sign}(\langle w^*,x \rangle) \neq y )=\eta(x)$, where a function $\eta$ is bounded in $[0,\eta]$ (with overload of notation).
This is a classical problem in learning theory. Information theoretically, one needs $\tilde{\Theta}(1/\gamma^2 \epsilon)$ samples, while for computationally efficient algorithms, it is widely believed that inverse quadratic dependence in $\epsilon$ (e.g. $1/\epsilon^2$) is necessary.
The main result of this paper is to present an efficient algorithm with nearly optimal sample complexity $\tilde{O}(1/\gamma^2 \epsilon^2)$, essentially closing the question.
Strengths: 1. The paper studies a problem that is important to the learning theory community and provides a state-of-the-art result. The previous best algorithm required $O(1/\gamma^4 \epsilon^3)$ samples.
2. The paper is well-written.
Weaknesses: I do not see any major weaknesses.
Technical Quality: 4
Clarity: 4
Questions for Authors: None.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: This is a theoretical work with no societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper and the positive feedback. | Summary: The paper studies the problem of learning a $\\gamma$-margin halfspace with $\\eta$ Massart noise and provides the first computationally efficient algorithm having 0-1 error $<= \\eta + \\varepsilon$ with sample complexity $O(1/\\gamma^2\\varepsilon^2)$ which nearly matches the information theoretic bound of $O(1/\\gamma^2\\varepsilon^)$ improving upon the previous $O(1/\\gamma^4\\varepsilon^3)$ sample complexity by [Chen et al. ‘20].
The main contributions of the paper are a novel “global” (as opposed to previous conditioning based) optimization to solve this problem directly using gradient descent. Towards this, a novel convex loss is defined using an independent vector as a parameter, and using the margin as a bound on the scaling parameter. The gradient w.r.t. the solution vector is used in an SGD loop, while the independent vector is updated by the gradient step, and its projection on the unit ball is the updated solution. This clever formulation leads to the gradient being composed of the “correct” direction of optimization along with an estimation error. The former minimizes the distance to the true solution, while the latter is bounded using standard concentration arguments.
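The iterate/projection structure summarized above might be sketched as follows. This is a toy NumPy illustration only: it substitutes a plain perceptron-style surrogate for the paper's reweighted convex loss, and all function names and parameters are our assumptions.

```python
import numpy as np

def projected_online_sgd(samples, d, step=0.5):
    """Illustrative projected online SGD for margin halfspaces: each
    sample is used once, and projecting the iterate onto the unit ball
    yields one candidate halfspace per step.  The paper's actual loss
    and gradient are more involved; a perceptron-style surrogate is
    used here purely to show the update/projection loop."""
    u = np.zeros(d)                            # unprojected iterate
    candidates = []
    for x, y in samples:
        w = u / max(1.0, np.linalg.norm(u))    # projection onto unit ball
        if y * np.dot(w, x) <= 0:              # surrogate (sub)gradient step
            u = u + step * y * x
        candidates.append(w.copy())
    return candidates

def best_candidate(candidates, holdout):
    """Select the candidate with the smallest empirical 0-1 error on
    fresh independent samples, as described in the summary above."""
    def err(w):
        return np.mean([y * np.dot(w, x) <= 0 for x, y in holdout])
    return min(candidates, key=err)
```

The final selection step is what accounts for the extra $\tilde{O}(1/\epsilon)$ evaluation cost discussed in the rebuttal, since every candidate must be scored on the held-out samples.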
Strengths: 1. Novel loss formulation and gradient update step which can be directly used via SGD.
2. Simple algorithm and analysis.
3. Near optimal computational bound on an important problem.
Weaknesses: Result is specific to a particular noise model and it is not clear if the techniques are more broadly applicable.
Technical Quality: 4
Clarity: 4
Questions for Authors: In Algorithm 1, $\lambda_t$ seems to be a constant independent of $t$, so it can be fixed outside the loop.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and the provided questions. We respond to each point raised by the reviewer below.
> (Weaknesses 1):Result is specific to a particular noise model and it is not clear if the techniques are more broadly applicable.
*Response:* We would like to point out that the Massart (or bounded noise) model is essentially the strongest label noise model that allows for polynomial-time algorithms in the distribution-free PAC setting. In particular, if the label noise is fully adversarial (agnostic model), it is computationally hard to achieve any non-trivial error guarantees for the class of margin halfspaces (i.e., to achieve "weak learning"); see [1,2,3]. Moreover, we believe that the problem we study and the results themselves are interesting in their own right (the algorithmic complexity of the problem has been a longstanding open question; the NeurIPS'19 paper on the topic received the best paper award at that conference and has subsequently had a significant impact). That said, while our focus has been on this particular problem, we believe that the technical analysis of our algorithm could be of broader interest. Specifically, we feel that our white-box analysis of online SGD is novel and could be useful elsewhere. Moreover, the reweighting scheme that we employ may be useful in other problems, as it provides a method to *convexify* the 0-1 loss.
> (Question): In Algorithm 1, $\lambda_t$ seems to be a constant independent of $t$, so it can be fixed outside the loop.
*Response:* Thank you for pointing this out. We will move $\lambda_t$ outside the main loop.
[1] Hardness of Learning Halfspaces with Noise, Venkatesan Guruswami, Prasad Raghavendra, FOCS 2006
[2] Complexity Theoretic Limitations on Learning Halfspaces, Amit Daniely, STOC 2016
[3] Hardness of agnostically learning halfspaces from worst-case lattice problems, Stefan Tiegel, COLT 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and the clarification. I will keep my rating. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time, effort, and feedback. We are encouraged by the positive comments of reviewers, and that our paper was appreciated for the: (i) **significant contribution** (SyXT, JiXV, 2xNf); (ii) **technically novel** (g5LR,SyXT) and (iii) **writing quality** (g5LR, JiXV, 2xNf). We respond to specific comments below. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper essentially resolves the *computational sample complexity* of learning $\gamma$-margin halfspaces under the Massart noise model. Here, computational sample complexity refers to the number of samples required for polynomial-time algorithms, as opposed to general statistical estimators which may be computationally intractable. Previous works have given rigorous evidence, in the form of lower bounds against large classes of algorithms such as SQ algorithms, of an *information-computation* gap for learning $\gamma$-margin halfspaces. Without restrictions on computation, the sample complexity is $\Theta(1/(\gamma^2 \epsilon))$ for achieving zero-one loss of $\eta + \epsilon$, where $\eta$ is the uniform bound on the Massart noise. For a large class of efficient algorithms, however, the required sample complexity is at least $\Omega(1/(\gamma^2 \epsilon^2))$.
They show that the $1/(\gamma^2 \epsilon^2)$ sample complexity is essentially tight by designing a polynomial-time algorithm for learning $\gamma$-margin halfspaces. Their algorithm is based on a novel and technically insightful choice of the loss sequence for online SGD, which results in a surprisingly simple and efficient algorithm with near-optimal sample complexity.
Strengths: This is a well-written paper that stands out for being both technically insightful and simple. A simple and efficient algorithm (online SGD with a specific choice of loss sequence) achieves optimal sample complexity and lends itself to a clean analysis. It's hard to ask for more than this.
The core idea behind the algorithm is a clever choice of the loss sequence $(\ell_t)$ with respect to which one runs online SGD. Previous works have already employed the LeakyReLU loss as a convex surrogate for the zero-one loss and achieved suboptimal upper bounds on the sample complexity. The LeakyReLU loss with parameter $\lambda > 0$ is defined by $\ell_\lambda(a) = (1-\lambda)\, a\, \mathbb{1}[a \ge 0] + \lambda\, a\, \mathbb{1}[a < 0]$. Applying this to the halfspace learning setting, a straightforward calculation shows that $\ell_\lambda(-y (w \cdot x)) = (\mathbb{1}[\mathrm{sign}(w\cdot x) \neq y] - \lambda)|w \cdot x|$, where $(x, y) \in \mathbb{S}^{d-1} \times \{\pm 1\}$ is a sample from the $\gamma$-margin halfspace distribution and $w \in \mathbb{R}^d$ is a candidate halfspace. Note the resemblance to the *shifted* zero-one loss $L(w) = \mathbb{E}_{(x,y)} \mathbb{1}[\mathrm{sign}(w\cdot x) \neq y] - \eta$ (when $\lambda = \eta$ in LeakyReLU). By the $\eta$-Massart noise assumption, $L(w^{*}) \le 0$ for the optimal halfspace $w^{*} \in \mathbb{R}^d$, and $L(w) \ge \epsilon$ for any halfspace $w$ with zero-one loss at least $\eta + \epsilon$.
The key difference between this shifted zero-one loss and the LeakyReLU is the $|w \cdot x|$ term. In particular, if we reweight each sample LeakyReLU loss by $1/|w \cdot x|$, we recover the shifted zero-one loss which is unfortunately non-convex. The authors overcome this issue by considering a *family* of bounded and convex loss functions $(\ell_u)$ indexed by $u \in \mathbb{R}^{d}$. Each $\ell_u$ is simply the LeakyReLU loss reweighted by $1/|u \cdot x|$. It is precisely this decoupling of the halfspace parameter $w$ and the reweighting parameter $u$ that leads to the guarantees of the algorithm. The *sequence* of reweighting parameters $(u_t)$, each of which leads to a different loss $\ell_t$, is chosen adaptively by online SGD.
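The identity relating the LeakyReLU loss to the shifted zero-one loss is easy to sanity-check numerically. The following sketch (function names are ours) computes the gap between the two sides for arbitrary inputs:

```python
import numpy as np

def leaky_relu_loss(lam, a):
    """LeakyReLU loss: (1 - lam) * a for a >= 0, lam * a for a < 0."""
    return (1.0 - lam) * a if a >= 0 else lam * a

def identity_gap(w, x, y, lam):
    """Difference between the two sides of the identity
    l_lam(-y <w, x>) = (1[sign(<w, x>) != y] - lam) * |<w, x>|."""
    lhs = leaky_relu_loss(lam, -y * np.dot(w, x))
    mistake = 1.0 if np.sign(np.dot(w, x)) != y else 0.0
    rhs = (mistake - lam) * abs(np.dot(w, x))
    return lhs - rhs
```

For random draws of $w$, $x$, and $y \in \{\pm 1\}$ the gap is zero up to floating-point error, confirming the "straightforward calculation" above.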
Weaknesses: I did not find any significant weaknesses, only minor comments regarding the presentation.
- **The expression "(vector) v is independent of w" (line 174, 176) is confusing.** I think it's easy to confuse "independence" with statistical independence. It would be helpful to clarify this by adding that the reweighting term is *constant* with respect to the parameter $w \in \mathbb{R}^d$, and the reweighting term $W(v \cdot x, \gamma)$ remains the same when taking the gradient of $L_{v}(w)$. This is already implicit in the mathematical expressions, but providing additional explanation would benefit readers.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Does the near-optimal online SGD algorithm fall within the class of SQ algorithms? If not, could there still exist non-SQ algorithms that achieve sample complexity with subquadratic dependence on $1/\epsilon$?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper and for the positive feedback. Below we provide specific responses to the points and questions raised by the reviewer.
> (Weaknesses 1): The expression "(vector) v is independent of w" (line 174, 176) is confusing. I think it's easy to confuse "independence" with statistical independence. It would be helpful to clarify this by adding that the reweighting term is constant with respect to the parameter $w \in \mathbb{R}^d$, and the reweighting term $W(v \cdot x, \gamma)$ remains the same when taking the gradient of $L_{v}(w)$. This is already implicit in the mathematical expressions, but providing additional explanation would benefit readers.
*Response:* We thank the reviewer for this suggestion. We will clarify this point in the final version of the paper.
> (Question 1) Does the near-optimal online SGD algorithm fall within the class of SQ algorithms? If not, could there still exist non-SQ algorithms that achieve sample complexity with subquadratic dependence on $1/\epsilon$?
*Response:* The online GD algorithm (implemented with batches) can indeed be efficiently realized as an SQ algorithm. As we explain below, our algorithm can be formulated in this way; therefore, the previously known SQ lower bound covers the algorithm developed in our paper. In more detail, this can be seen as follows: in each iteration of our algorithm, the update rule computes a gradient of the form $g=\mathbf{E}[f(x,y)]$, where $f$ can be viewed as the query function in the SQ model.
We also note that prior work established the same information-computation tradeoff for the class of low-degree polynomial tests (in addition to SQ algorithms). Historically, lower bounds against SQ algorithms and low-degree polynomials have been viewed as strong evidence of hardness. That said, these are restricted models of computation and it is in principle possible that an algorithm can surpass them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! | null | null | null | null | null | null |
Chain-of-Thought Reasoning Without Prompting | Accept (poster) | Summary: The paper investigates the inherent capabilities of LLMs to generate CoT reasoning paths. This study introduces CoT-decoding, a method that explores alternative top-k tokens in the decoding space to uncover reasoning paths without any specialized prompting. The findings indicate that the presence of a CoT reasoning path correlates with increased model confidence in its final answer. The paper highlights that CoT-decoding can enhance the reasoning performance of language models by extracting more reliable decoding paths.
Strengths: 1. The paper introduces CoT-decoding to elicit reasoning paths and provides an effective alternative to prompt engineering.
2. The results demonstrate significant improvements over greedy decoding, showcasing the practical benefits of the proposed method.
3. The study finds a clear correlation between the presence of CoT paths and increased model confidence, providing a useful metric for identifying reliable decoding paths.
Weaknesses: 1. The primary contribution of the paper appears to be a method for selecting a decoding path from the top-k generated paths. While this approach is useful, it lacks novelty and significant impact.
2. The motivation behind the study is unclear to me for several reasons. Firstly, providing questions alone constitutes a form of prompting instead of no prompting. Secondly, the use of a prompt that inherently challenges the elicitation of CoT reasoning paths via greedy decoding, only to then introduce a specialized decoding method to address this, strikes me as contrived. Lastly, I question the claim that CoT-decoding effectively enhances the reasoning capabilities of language models, as it merely selects from pre-generated paths rather than uncovering new, substantive reasoning processes.
3. The proposed method may face challenges in tasks where identifying correct answer spans is difficult, limiting its practical applicability.
4. The comparison between CoT-decoding and greedy decoding is unfair. CoT-decoding inherently requires multiple reasoning paths to be generated first, which is not a requirement for greedy decoding. The exploration of multiple decoding paths significantly increases the computational cost, which could be a drawback for practical applications.
5. The evidence provided in the paper is restricted to a few reasoning benchmarks and toy tasks. To convincingly demonstrate its utility, the method should be tested across a broader array of datasets, such as the MATH benchmark, and in more complex real-world scenarios.
6. The range of models evaluated in the study is too narrow. Including mainstream models like GPT-3.5, GPT-4, and diverse open-source models like llama3 in various sizes would provide a more robust evaluation of the method's effectiveness.
Technical Quality: 2
Clarity: 3
Questions for Authors: NA
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback!
> The primary contribution of the paper appears to be a method for selecting a decoding path from the top-k generated paths. While this approach is useful, it lacks novelty and significant impact.
To clarify our contribution, our first finding is that LLMs inherently possess reasoning capabilities even without any additional prompting, this is a *novel* finding in contrast to many previous papers proposing better prompting to enable LLM reasoning. Our other contribution is to propose CoT-decoding that identifies the correct CoT-path, this is also the *first decoding-only method* that effectively enables language models to reason, please see Table 4 for a detailed comparison to all existing decoding algorithms.
> The motivation behind the study is unclear to me for several reasons. Firstly, providing questions alone constitutes a form of prompting instead of no prompting. Secondly, the use of a prompt that inherently challenges the elicitation of CoT reasoning paths via greedy decoding, only to then introduce a specialized decoding method to address this, strikes me as contrived. Lastly, I question the claim that CoT-decoding effectively enhances the reasoning capabilities of language models, as it merely selects from pre-generated paths rather than uncovering new, substantive reasoning processes.
Thanks for the suggestion, we will add more clarification to the motivation as below.
First, to clarify the “prompting” part: the question has to be an input to the model, otherwise the model will have no input :) In existing literature, the “prompting” part usually refers to any additional prompts added to the question, e.g., zero-shot CoT uses “let’s think step by step” and few-shot CoT uses few-shot demonstrations before the question. We will make this point more clear in the paper.
Second, your comment is similar to what existing papers try to claim: LLMs *can’t reason* with questions only. Our study is the first to show that it is *not the case*: the observation that models can’t reason without additional prompting is due to the prevalent usage of greedy decoding, while the top-k alternative decoding paths can unveil the existence of CoTs.
To your last point, yes as we emphasized in multiple places in our paper, our method “enables a better understanding of LLMs’ intrinsic reasoning capabilities” (line 60). Our primary finding is that LLMs can already reason by themselves after pre-training, and “the reasoning process can be readily elicited by simple decoding changes” (line 56).
We say that the model’s reasoning performance is enhanced by CoT-decoding *when compared to greedy decoding*, and we do not claim that we enhance the inherent reasoning capabilities of LLMs, we just made those capabilities more prominent. We will make this point more clear in our paper.
> The proposed method may face challenges in tasks where identifying correct answer spans is difficult, limiting its practical applicability.
Yes, as we discussed in our limitation section, we hope this can be better addressed in future research by better learning the model’s internal representation across a broader, more open-ended answer space.
> The comparison between CoT-decoding and greedy decoding is unfair. CoT-decoding inherently requires multiple reasoning paths to be generated first, which is not a requirement for greedy decoding. The exploration of multiple decoding paths significantly increases the computational cost, which could be a drawback for practical applications.
Thanks for your question. We explore more decoding paths because we want to investigate whether the model inherently possesses the reasoning capability, which is currently masked by the prevalent usage of greedy decoding. And yes, as discussed in our limitation section, our method incurs more computational cost; we can use the CoT paths found by our method to further train the model, such that those reasoning paths can be readily output during inference.
> The evidence provided in the paper is restricted to a few reasoning benchmarks and toy tasks. To convincingly demonstrate its utility, the method should be tested across a broader array of datasets, such as the MATH benchmark, and in more complex real-world scenarios.
> The range of models evaluated in the study is too narrow. Including mainstream models like GPT-3.5, GPT-4, and diverse open-source models like llama3 in various sizes would provide a more robust evaluation of the method's effectiveness.
Thanks for the suggestion. We include results on Llama-3 below, showing the robustness of our proposed approach. We also add the MATH benchmark (the MATH-500 held-out set from https://arxiv.org/pdf/2305.20050) to cover a broader range of datasets.
For GPT models, note that the focus of our paper is on *pre-trained* models to investigate their intrinsic reasoning capabilities, while current exposed GPT APIs are all instruction fine-tuned models, hence it is hard to distinguish whether the model can reason already after pre-training, or acquired such ability during instruction fine-tuning via a substantial amount of CoT reasoning data. Also note that in Figure 3, we already plotted the performance of CoT-decoding on models with *various sizes* (XS, Small, Medium, Large) and showed performance improvement across the board.
Results on Llama-3-8B pre-trained model:
| | GSM8K | MultiArith | MATH |
|---| --- | --- | --- |
| greedy decoding | 21.4% | 41.8% | 14.0% |
| CoT-decoding (max) | 32.7% | 50.7% | 19.8% |
| CoT-decoding (agg) | 47.9% | 77.8% | 22.6% | | Summary: The paper explores the inherent reasoning capabilities of large language models (LLMs) without the need for explicit prompting. By altering the decoding process to consider top-k alternative tokens, the authors reveal that Chain-of-Thought (CoT) reasoning paths can emerge naturally. This approach bypasses the need for task-specific prompt engineering and demonstrates that LLMs possess intrinsic reasoning abilities. Extensive empirical evaluations across various reasoning benchmarks show significant performance improvements using CoT-decoding compared to conventional greedy decoding.
Strengths: - The paper introduces a novel approach to reveal LLMs' reasoning capabilities without explicit prompts by altering the decoding process.
- The paper is clearly written, with detailed explanations and illustrations of the CoT-decoding process.
- The findings challenge the prevailing notion that LLMs require specific prompting for reasoning, highlighting the models' intrinsic abilities.
Weaknesses: - The approach, while novel in its specific application, builds on existing concepts of decoding and model confidence. The novelty is somewhat incremental.
- The scope of experiments could be broadened to include more diverse and complex reasoning tasks. Additional benchmarks and comparisons with state-of-the-art prompting methods would strengthen the paper.
- The paper could benefit from a more detailed comparative analysis with other decoding and prompting strategies to contextualize its contributions better.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the authors provide more detailed comparisons with other state-of-the-art prompting and decoding methods?
- How does the CoT-decoding method perform on more complex and diverse reasoning tasks not covered in the current experiments?
- Are there any potential limitations or biases introduced by relying on top-k alternative tokens for decoding?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed some limitations of their work, such as the additional computational costs incurred by exploring alternative decoding paths. However, the paper could further discuss the potential biases introduced by this approach and its applicability to more complex, open-ended tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback!
> Can the authors provide more detailed comparisons with other state-of-the-art prompting and decoding methods? The paper could benefit from a more detailed comparative analysis with other decoding and prompting strategies to contextualize its contributions better.
- Comparison to other decoding methods: Please see Table 4 for the detailed comparison on all popular decoding methods used by SoTA LLMs, including greedy, top-k, temperature sampling, nucleus sampling, beam search, and SoTA decoding methods like self-consistency (no prompt), and we show that CoT-decoding is the **only decoding algorithm** that significantly enhances reasoning in language models.
- Comparison to other prompting methods: please see Table 7 for the results with CoT and self-consistency (with CoT prompt), where both are standard approaches used in major LLM reports to achieve SoTA.
Note that CoT-decoding is a decoding algorithm which is **orthogonal** to existing prompting techniques, and (1) CoT-decoding can be easily combined with existing prompting techniques to yield further gains (see Table 7 for the results); (2) our paper aims to show that LLMs inherently possess reasoning abilities *without prompting*, hence adding more sophisticated prompting techniques is not the primary focus of this paper.
- To contextualize our contributions better: existing literature proposing more complex prompting methods often requires adding task-specific human-prior and performing manually-intensive prompt-engineering. As a result, those methods may achieve better task-specific improvements but could be hard to scale or transfer poorly across tasks or models. In contrast, our method requires *no prompt-engineering*, *no human intervention*, is completely *task and model-agnostic* and improves reasoning across the board.
> How does the CoT-decoding method perform on more complex and diverse reasoning tasks not covered in the current experiments?
Our current experiments covering math, commonsense, and symbolic reasoning (with various difficulty levels) showed a *consistent trend*: CoT-decoding can effectively uncover a model's intrinsic reasoning capabilities previously obscured by greedy decoding, although the model's own reasoning ability varies depending on the task difficulty level, which can be attributed to the task prominence in its pre-training distribution.
For example, in Figure 4, we show that when task difficulty increases or becomes more synthetic, it becomes harder to find the correct CoT paths (like tasks that require > 3 steps of accurate state tracking). Hence for a new task, the existence of CoT paths will depend on how prominent similar data exists in pre-training, and if relevant knowledge can be retrieved and the solution can be found within a few steps of knowledge manipulation.
Below we also add results of the Llama-3-8B pre-trained model on the MATH benchmark, which is a substantially more diverse and complex reasoning dataset. We can see that CoT-decoding can still yield effective improvements over greedy decoding.
| | MATH accuracy |
| -------- | ------- |
| greedy decoding | 14.0% |
| CoT-decoding (max) | 19.8% |
| CoT-decoding (agg) | 22.6% |
> Are there any potential limitations or biases introduced by relying on top-k alternative tokens for decoding?
Thanks for the question. We rely on the model’s own top-k token ranking because we want to better understand the model's intrinsic reasoning capabilities. As our study shows in Section 3.2 and Figure 4, our analysis unveils the model's intrinsic vulnerabilities in reasoning, e.g., on Big-bench tasks with controlled-difficulty levels, the model's ranking accuracy of top-k tokens becomes lower as the task complexity increases.
Hence for highly difficult tasks, if the model is not well-trained, it’s possible that none of the top-k tokens yield relevant paths. This in turn can help us identify a model's areas of weakness though, so we know that we need additional data coverage in model training, or we need to explicitly guide the model via expert knowledge during inference to solve those tasks.
> The approach, while novel in its specific application, builds on existing concepts of decoding and model confidence. The novelty is somewhat incremental.
Note that the detail of CoT-decoding differs substantially from existing top-k decoding or model confidence estimation, hence CoT-decoding is novel in its design and effectiveness in significantly improving LLM reasoning:
- Existing top-k decoding is a sampling algorithm, where each token is *sampled* to enhance the diversity in the *overall sequence*. While in CoT-decoding, other than the first token, all remaining tokens are greedily generated. The reason is that we want to encourage diversity in the 1st token such that the model can escape from the local optima, while for the rest of the tokens we want the model to follow the optimal reasoning path hence they’re still greedily generated. This is also different from beam search because beam search uses the model’s sequence probability to rank the responses. In contrast, in CoT-decoding we observe that the model's sequence probability is not reliable for reasoning tasks (Table 2), while the model's final answer confidence score proves to be significantly more accurate in identifying the correct CoT paths.
- Model confidence estimation: note that our confidence estimation is based on our novel observation that responses with a CoT first typically have a more confidently-decoded final answer. This is also very different from existing works that try to estimate the model’s confidence for the whole sequence to identify scenarios where the model is uncertain.
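To make the contrast with standard top-k sampling and beam search concrete, here is a toy, hedged sketch of the scheme as we read it (not the authors' implementation): branch only on the top-k first tokens, decode the rest of each branch greedily, and rank branches by a top-1 minus top-2 probability margin. The `next_token_probs` interface and the use of the mean margin over greedy steps as a stand-in for the paper's answer-span confidence are our simplifying assumptions:

```python
import numpy as np

def cot_decode(next_token_probs, bos, k=3, max_len=8):
    """Branch on the top-k FIRST tokens, then decode each branch greedily.
    Rank branches by the mean top-1 minus top-2 probability margin over the
    greedy steps (a simplified stand-in for the answer-span confidence)."""
    first = np.argsort(next_token_probs([bos]))[::-1][:k]
    best_path, best_conf = None, -1.0
    for t0 in first:
        path, margins = [bos, int(t0)], []
        while len(path) < max_len:
            p = next_token_probs(path)
            order = np.argsort(p)[::-1]
            margins.append(float(p[order[0]] - p[order[1]]))
            path.append(int(order[0]))  # greedy after the first token
        conf = float(np.mean(margins))
        if conf > best_conf:
            best_path, best_conf = path, conf
    return best_path, best_conf
```

With `k=1` this reduces to greedy decoding, so any gain over `k=1` comes purely from diversifying the first token and re-ranking by confidence rather than by sequence probability.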
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal, I will keep my positive ratings.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our rebuttal. We will add these additional discussion points to the paper. | Summary: The paper investigates an innovative approach to eliciting chain-of-thought (CoT) reasoning from pre-trained large language models (LLMs) without the need for explicit prompting techniques, which typically require intensive prompt engineering and can obscure the model's inherent reasoning capabilities. Instead, this study proposes altering the decoding process by exploring alternative top-k tokens, rather than the conventional top-1 greedy decoding path. This method, termed CoT-decoding, effectively reveals natural CoT reasoning paths within the language model's outputs, leading to more accurate and confident model responses across various reasoning tasks.
Contributions:
1. **Novel Decoding Strategy:** The paper introduces a novel decoding strategy that bypasses the need for prompt engineering by modifying the decoding process to explore alternative token paths, allowing the model to display its intrinsic reasoning capabilities.
2. **Empirical Validation:** Extensive empirical studies demonstrate that this CoT-decoding approach not only enhances the accuracy of LLMs on reasoning tasks but also increases the confidence of the models in their answers, suggesting a higher reliability of the reasoning paths discovered.
3. **Comparative Analysis:** The paper provides a comparative analysis of CoT-decoding against traditional methods, showing significant improvements over greedy decoding and other baseline methods across multiple benchmarks for mathematical and commonsense reasoning tasks.
Strengths: 1. The paper introduces a novel approach to eliciting chain-of-thought (CoT) reasoning from pre-trained large language models (LLMs) without requiring explicit prompting. This method diverges from conventional prompting techniques, which involve either few-shot or zero-shot prompting, by modifying the decoding process to explore alternative token paths. This strategy is both innovative and creative as it challenges the standard practice of prompt engineering and demonstrates that CoT reasoning can be naturally elicited through decoding adjustments.
2. The paper is supported by extensive empirical studies that validate the efficacy of the CoT-decoding method. The experiments are well-designed, covering a broad range of reasoning tasks, including mathematical and commonsense reasoning benchmarks.
3. The paper is exceptionally clear and well-organized. The authors provide a detailed description of the CoT-decoding process, accompanied by illustrative figures and examples that help clarify how the method diverges from traditional decoding strategies.
Weaknesses: 1. The method involves generating multiple decoding paths and evaluating them to identify the most confident reasoning trajectory, which could be computationally expensive, especially when applied at scale or in real-time applications.
2. The approach heavily relies on the selection of top-k alternative tokens, which may not always yield the most relevant or coherent paths for reasoning, especially in more complex or nuanced tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None. Good Paper~
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review!
> The method involves generating multiple decoding paths and evaluating them to identify the most confident reasoning trajectory, which could be computationally expensive, especially when applied at scale or in real-time applications.
Thanks for the feedback. Yes as we discussed in the limitation section, our method does incur higher computational cost when we try to identify those CoT-paths. For real-time applications, we can incorporate the “good paths” found by our CoT-decoding algorithm into model training, such that during inference the model can directly output those good paths as the top-paths.
For the study in this paper, we spent higher compute to get the top-k paths mainly to investigate 1) whether CoT-paths exist in the top-k paths, and 2) how large k needs to be for us to find a CoT-path. Figure 4 shows that for many tasks, simply increasing k>1 can already help uncover many of the previously-hidden CoT paths.
> The approach heavily relies on the selection of top-k alternative tokens, which may not always yield the most relevant or coherent paths for reasoning, especially in more complex or nuanced tasks.
Thanks for this question. Yes, we rely on the model's own probability of ranking the top-k tokens during the first decoding step, and we do observe that for highly difficult tasks, if the model is not well-trained, it’s possible that none of the top-k tokens yields relevant paths. This in turn can help us identify a model's areas of weakness though, so we know that we need additional data coverage in model training, or we need to explicitly guide the model via expert knowledge during inference to solve those tasks. We will add more discussion on this point to make it more clear.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer BrnW
Comment: Thank you for your thorough rebuttal and for addressing the concerns raised. I believe this work is very meaningful and provides valuable insights into enhancing the model's reasoning capabilities and constructing higher-quality datasets. In light of your detailed response and the efforts made to address these complexities, I have decided to increase my score from 7 to 8.
---
Reply to Comment 1.1.1:
Comment: Thank you for carefully reading through our rebuttal and raising your score. We will add these additional discussion points into our paper. We truly appreciate your time in reviewing this work. | Summary: The paper investigates the intrinsic reasoning capabilities of LLMs without relying on prompting techniques like few-shot or zero-shot prompting. The study introduces an alternative approach by altering the decoding process, specifically by exploring the top-k alternative tokens rather than following the standard greedy decoding path. The proposed "CoT-decoding," reveals that LLMs inherently possess chain-of-thought reasoning abilities that are often hidden by human-involved prompting / conventional decoding methods.
Strengths: - The paper demonstrates that LLMs can generate CoT reasoning paths without explicit prompts by modifying the decoding process. This challenges the prevailing notion that LLMs require prompting to reason effectively. It shows that LLMs have inherent reasoning capabilities that are more accurately assessed by CoT-decoding.
- Experimental results show that CoT-decoding naturally reveals CoT reasoning paths during the decoding process, significantly enhancing the model's reasoning capabilities and surpassing greedy decoding. It is also observed that these paths are more prevalent in tasks frequently encountered in pre-training data.
- The paper is generally well-written and easy to follow.
Weaknesses: - **Extra Computational Cost**: Exploring top-k alternative tokens for decoding paths requires additional computational resources. Compared to CoT/Zero-shot CoT/ComplexCoT/CoT-SC(n) where n is not a large number, CoT-decoding necessitates evaluating multiple decoding paths, which increases computational complexity and time cost, especially when dealing with complex multi-step tasks.
- **Limitations in Open-ended Questions**: This method mainly relies on the model's confidence in specific answers during the decoding process to select reasoning paths. However, for more open-ended questions, this probability difference-based selection method may not be precise enough, making it challenging for LLMs to select the optimal path in a much larger answer space.
Technical Quality: 3
Clarity: 3
Questions for Authors: Does the limitation of branching only at early decoding stages restrict the flexibility and applicability of CoT-decoding? Would exploring branching at later stages in the decoding process improve overall performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I do not identify any negative societal impacts in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review!
> Does the limitation of branching only at early decoding stages restrict the flexibility and applicability of CoT-decoding? Would exploring branching at later stages in the decoding process improve overall performance?
Thanks for the question. We do have a study of branching at later stages in Figure 6, Appendix D (deferred there due to space limits), and we show that branching at later steps is viable but incurs a much higher computational cost.
For most tasks, we found that early branching significantly improves the diversity of potential paths, and is usually sufficient to uncover the CoT paths. With later branching, sometimes the model encounters difficulty recovering from incorrect paths (e.g., for the math question, after generating the token "5" the model is not able to recover from an erroneous path).
For some tasks though (e.g., the year parity task), mid-branching can help uncover more diverse paths, potentially leading to a better performance. We think it would be an interesting future direction to determine the optimal branching points depending on the specific task/question and efficiently search for the best possible paths.
> Extra computational cost and limitation to open-ended questions.
Thanks for the feedback. Yes, as discussed in the limitation section, our method does incur higher computational costs; in practice, we can use the optimal paths identified by CoT-decoding to further fine-tune the language model, which helps the model directly output those paths at inference time.
For open-ended questions, we discussed that for some tasks identifying the answer could be non-trivial, and we hope this can be better addressed in future research by better learning the model’s internal representation across a broader, more open-ended answer space. Fine-tuning with CoT-decoding paths can potentially help here as well, as the model learns to output its thinking process first when it is uncertain on open-ended questions.
---
Rebuttal 2:
Title: Thanks for your rebuttal
Comment: Thank you very much for the rebuttal. I found the information I wanted in the appendix. CoT-Decoding is an interesting discovery, and I have raised my score from 7 to 8, leaning toward clear acceptance.
---
Rebuttal Comment 2.1:
Comment: Thank you for going through our rebuttal and raising the score. We will move the branching discussion to the main text to make this information more accessible. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Active, anytime-valid risk controlling prediction sets | Accept (poster) | Summary: This paper proposes a probabilistic strategy for stream-based active learning which allows for construction of anytime valid prediction sets. The strategy stems from maximizing the variance process of an e-process, which is also used in designing the prediction sets. Relevant theoretical guarantees (validity and regret) and proof-of-concept experiments are presented.
Strengths: For me, the most interesting contribution of the paper, is that it brings the perspective of safe betting to active learning.
The results are not written in the most flashy way, but the overall thought process is straightforward to follow and proofs are clear.
Weaknesses: - My assessment is that Section 2 lacks novelty, as in, very similar results have been proposed in earlier work under slightly different problem setting. Would be nice to make section 3 the main contribution of the paper, and present section 2 as a background/recipe section which leads to results of Section 3.
- The proposed e-processes of Eq. (5) and (6) are both from results of Waudby-Smith [2023 & 2024]. If I understand correctly, both of these processes are proposed for a more general framework, and the current paper applies them to its problem setting.
- Proposition 1 and Theorem 1 are immediate corollaries by definition of the betting e-processes.
- It is best that this is merged together and presented as a corollary for the specific choice of $M_t$.
- Theorem 1 holds true for any $M_t$ that is an e-process under the null. In the proof, however, Proposition 1 is invoked. It would be good if the statement were updated and the assumption on $M_t$ stated in it.
- I am not widely aware of the literature; I just want to flag that the related works section might be inadequate. I was expecting more work to have been done in this area. After reading the paper, I do not have a clear picture of the short history of results leading to this work, and I think this speaks to an incomplete literature review. There is no related work from the active learning literature, even though it aims to solve a relatively close (in some cases identical) problem.
- The contributions section, particularly the second paragraph, is too technical given what has already been introduced. I would generally shorten the introduction and present a more thorough literature review in the main text.
- I think parts of the problem setting are confusing, if not incorrect, and lines 44-52 should be re-written.
- Line 47 says $f$ outputs an action $A$ within a set $\mathcal A$. But later, in Line 50, it is written that $f$ is a prediction set for $Y$.
- Connection between $\mathcal A$ and $\mathcal Y$ is missing. I'm assuming $f$ outputs subsets of $\mathcal Y$, and I don't understand the auxiliary, confusing notation of $A$ and $\mathcal A$.
- If $f$ outputs a prediction set, then the notation $A$ and calling it an action is utterly confusing. Within active learning literature, $X$ is typically the action.
- It is not evident in the notation of the risk that $\rho$ depends on $f$.
Technical Quality: 4
Clarity: 2
Questions for Authors: 1) Why is assumption 1 in terms of $\rho$ and not just $r$? Is this needed? If possible, stating the assumption in terms of $r$ is cleaner in my opinion, since this way $\mathcal X$ and $\mathcal Y$ become assumption free.
2) The way section 2.2 is stated, what I collect from it is that the $M_t$ process is robust to adding a $\mathcal F_t$-adapted and bounded bias term. I can imagine that adding this bias may help with rapid increase of variance process of $M_t$. But I did not understand the motivation of this variance reduction, within the context of section 2. Could you please elaborate?
3) The proposed optimal labeling policy in Section 3 somewhat resembles variance maximization techniques, which are standard practice in stream-based active learning. Have you looked into the connections? How do these relate?
4) Doesn't the problem setting allow for comparison to standard active learning policies? Adding these would make the paper relevant to a considerably larger community.
Confidence: 2
Soundness: 4
Presentation: 2
Contribution: 2
Limitations: Limitations and assumptions are adequately stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Our responses to the highlighted weaknesses are as follows.
- **Novelty of e-processes.** Thank you for pointing out the error in the proof of Theorem 1 --- we will correct it as you have described. While we agree that the betting e-process was introduced in [1] and an inverse propensity weighted form was proposed in [2], we disagree that ours is simply an application of existing work. Our methods are novel in several ways.
1. The betting processes of [1, 2] are used to derive confidence sequences for the mean of a bounded (or lower bounded with bounded mean) random variable. In our setting, each betting process does not assume a different mean of the random variable tested under the null --- it assumes the same lower bound on the mean, that is, $E[r(X, Y, \beta)] \geq \theta$, for each value of $\beta$. Here, our goal is to estimate $\beta^*$ --- we only use the fact that under the null, we can derive a random variable that is lower bounded and has a bounded mean. Our methods are more similar to those in Lemmas 3.1 and 3.2 in [3]. In that work, however, the authors are only interested in designing confidence sequences for estimation, rather than outputting a parameter with risk control.
2. The anytime-valid risk control guarantee in Definition 1 is not the same as merely providing a confidence sequence (CS) for $\beta^*$. If one only provided a confidence sequence, it would not be clear which $\beta$ one should choose in the CS to ensure that $\rho(\beta) \leq \theta$ --- one of them is $\beta^*$ with high probability, but no guarantees would be made about the risk of any other $\beta$, so we would not be able to guarantee that we can always output a $\hat{\beta}_t$ with risk control at each time step. Thus, key to showing the risk control guarantee in Definition 1 is also the assumption that $\rho$ is monotonic in $\beta$. The choices of $\beta$ that are safe and provide risk control are the ones that have been eliminated from the CS. In fact, the null of our e-process is *the opposite* of the risk control we want to achieve, since our resulting CS captures the set of $\beta$ that still might have risk that is *at least* $\theta$. This is a relatively novel formulation. Creating a CS to capture the opposite of the desired result one would want probabilistic guarantees for (in some sense) has been used in auditing election results with risk-limiting audits [4], but the guarantees and use in that application are quite different from ours.
3. In [2], there is an assumption that one always has access to an outcome $Y$ at each time step (i.e., the reward), since the application of interest is off-policy evaluation, where some action is taken at each time step and the corresponding reward is obtained. In our setting, we do not receive the outcome $Y$ at all if we did not query a label --- however, the weighted estimator is an unbiased estimator of the actual $r(X, Y, \beta)$ incurred at each time step, so our e-process is still correct.
- **Related work on active learning.** See our point on comparing with active learning in the top-level rebuttal.
- **Problem setting seems confusing.** In line 50, we say that our action is a prediction set for $Y$ because, in many applications, the risk we wish to control is miscoverage. However, the action doesn't need to be a prediction set (e.g., it could be the behavior policy of a safe robot [5]). Our usage of $X$ and $Y$ as covariate and label is standard from previous papers on RCPS, but we will make clear what $A$ means and refers to in our final version. We will also clarify the relationship between $\rho$ and $f$ explicitly. The relationship is as follows: we define $\rho(\beta) = E[r(X, Y, \beta)] = E[\ell(Y, f(X, \beta))]$.
Our responses to the questions are as follows.
1. **Assumption of monotonicity on $\rho$.** The assumption on $\rho$ is slightly more general than assuming $r$ is monotonic directly. We appreciate the suggestion and will clarify that monotonicity of $r$ implies monotonicity of $\rho$ and this is a useful special case of our assumption that is distribution free.
2. **How the predictor reduces variance.** The e-process in Section 2.2 has $\bar{R}(\beta)$ inversely weighted by the probability of labeling when a label is queried. With the predictor set to always output 0, the variance of $\hat{r}(X, \beta) + (L / q(X)) \cdot \bar{R}(\beta)$ is larger than that of $r(X, Y, \beta)$. In the other extreme, if we had a perfect predictor, i.e., $r^*(X, \beta) = r(X, Y, \beta)$, then we recover exactly $r(X, Y, \beta)$. This has lower variance and would result in our growth rate being identical to that of the e-process in Section 2 that queries every label. For a more formal analysis, Theorem 3 encapsulates how the accuracy of the predictor changes the growth rate of our e-processes.
3. **Relationship to variance maximization in active learning.** Our optimal policy samples covariates $X$ with probability proportional to the square root of the expected conditional squared risk --- when the optimal predictor is used, this quantity is the same as the conditional standard deviation of the risk. This is similar in spirit to existing active learning algorithms that aim to select data points with the highest variance of predictions (e.g., [6]). There is a subtle point here: variance in predictions is not precisely the same as variance in risk (i.e., in a prediction set setting, if the different predictions are covered by the prediction set anyway, then there is no variance in risk). In addition, many active learning algorithms are deterministic, i.e., they pick the points that best fit a criterion (e.g., conditional variance, disagreement, diversity, etc.) rather than sampling them with some probability.
4. **Comparison to standard active learning policies.** See previous points on related work in active learning and the relationship to variance maximization.
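To illustrate the variance-reduction point in response 2 numerically (illustrative code, not the paper's implementation; here we take $\bar{R}(\beta)$ to denote the residual $r(X, Y, \beta) - \hat{r}(X, \beta)$, which is consistent with the perfect-predictor case above):

```python
# Illustrative check (not the paper's code) that the predictor-augmented
# inverse-propensity-weighted estimator is conditionally unbiased. Assumed
# form: estimate = r_hat + (L / q) * (r - r_hat), with L ~ Bernoulli(q).
def ipw_estimate(r_hat, r, L, q):
    return r_hat + (L / q) * (r - r_hat)

def expected_estimate(r_hat, r, q):
    # exact expectation over the label indicator L ~ Bernoulli(q)
    return q * ipw_estimate(r_hat, r, 1, q) + (1 - q) * ipw_estimate(r_hat, r, 0, q)

# Unbiased for the true risk r, no matter how inaccurate the predictor is.
for r_hat in (0.0, 0.3, 0.9):
    for q in (0.1, 0.5, 1.0):
        assert abs(expected_estimate(r_hat, r=0.7, q=q) - 0.7) < 1e-9

# A perfect predictor (r_hat == r) zeroes the residual: the estimate equals r
# whether or not a label is queried, i.e., the estimator has zero variance.
assert ipw_estimate(0.7, 0.7, 0, 0.5) == ipw_estimate(0.7, 0.7, 1, 0.5) == 0.7
```

The last two assertions mirror the two extremes discussed in response 2: a zero predictor inflates the variance through the $1/q$ weight, while a perfect predictor removes it entirely.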
---
Rebuttal 2:
Title: Rebuttal (continued)
Comment: **Additional comments on problem setting**. For example, we may wish to calibrate the aggressiveness of a robot's behavior policy that is parameterized by $\beta$ (e.g., [5]), where the goal is to reach a destination while avoiding obstacles. The action $A$ may be the control policy of the robot based on the environment, and the risk would be the distance to the nearest obstacle over the entire trajectory of the robot as it travels to its destination. We may want this risk to be controlled on average over the distribution of environments. Our framework applies to such a setup, and hence we term the output based on the covariate $X$ an action $A = f(X)$ rather than limiting it to only being a prediction set.
We also include our top-level comment on related work in active learning here:
**Comparison to active learning.** We briefly summarize the differences and similarities between active learning and our problem. Our problem objective, and the methods used to prove our guarantees' validity, differ from typical active learning methods. Active learning (stream-based and pool-based) aims to minimize label queries while still learning the best possible machine learning predictor. Guarantees in this area are usually model-based or learning-theoretic, i.e., they propose a model update/selection procedure and query procedure that minimizes the true risk of the model over a known or arbitrary function class and derive results using a notion of class complexity (when one is developing a procedure that is agnostic to the exact function class), or the methods are evaluated empirically for specific models, without guarantees. In contrast, our procedure tunes a calibration parameter that can be wrapped around any black-box model to provide a statistically rigorous risk guarantee. As a result, querying strategies that are deterministic (e.g., disagreement based, diversity based, etc.) cannot be directly imported to our problem setting, since the statistical guarantees we derive require that our queries be probabilistic. Further, we do not think existing active learning methods tackle the same objective, since they focus primarily on optimizing the performance of a classifier rather than guaranteeing risk control while calibrating a parameter. Further development of how to leverage active learning methods in our setting is a fruitful direction for future work.
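As a toy illustration of what a probabilistic (rather than deterministic) querying strategy can look like in this setting (the helper name and constants are hypothetical, not our actual policy): query a label with probability proportional to the square root of the predicted conditional second moment of the risk, clipped so that the inverse-propensity weights stay bounded.

```python
import math

# Hypothetical sketch of a probabilistic labeling policy in this spirit:
# query with probability proportional to sqrt(E[r^2 | X]) (equal to the
# conditional standard deviation of the risk when the predictor is optimal),
# clipped to [eps, 1] so that 1/q inverse-propensity weights stay bounded.
def query_probability(pred_second_moment, scale=1.0, eps=0.01):
    q = scale * math.sqrt(max(pred_second_moment, 0.0))
    return min(max(q, eps), 1.0)

# Covariates whose risk is predicted to be more variable are queried more often.
assert abs(query_probability(0.25) - 0.5) < 1e-12
assert query_probability(4.0) == 1.0    # clipped at 1
assert query_probability(0.0) == 0.01   # floor keeps the policy probabilistic
```

The floor `eps` is what keeps every query probability strictly positive, which is what the probabilistic guarantees above require; `scale` would in practice be tuned to meet the label budget.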
**References**
[1] I. Waudby-Smith and A. Ramdas. Estimating means of bounded random variables by betting. *Journal of the Royal Statistical Society Series B (Statistical Methodology)*, 2023.
[2] I. Waudby-Smith, L. Wu, A. Ramdas, N. Karampatziakis, and P. Mineiro. Anytime-valid off-policy inference for contextual bandits. *ACM / IMS Journal of Data Science*, 2024.
[3] P. Casgrain, M. Larsson, and J. Ziegel. Sequential testing for elicitable functionals via supermartingales. *Bernoulli*, 2024.
[4] I. Waudby-Smith, P. B. Stark, and A. Ramdas. Rilacs: Risk limiting audits via confidence sequences. *International Joint Conference on Electronic Voting*, 2021.
[5] J. Lekeufack, A. N. Angelopoulos, A. Bajcsy, M. I. Jordan, and J. Malik. Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions. arXiv:2310.05921, 2024.
[6] D. Cacciarelli and M. Kulahci. Active learning for data streams: a survey. *Machine Learning*, 2024.
---
Rebuttal Comment 2.1:
Comment: Thank you for your answers, and I apologize for the late response.
My questions are addressed, and I hope that the paper is updated accordingly -- I still think that the contributions would benefit a lot from a more crisp presentation e.g. by positioning the work more vividly.
Overall, I agree with reviewer 1rh6 and recommend acceptance. | Summary: The authors extend the framework of Risk Controlling Prediction Sets (RCPS) to a sequential setting where data is collected adaptively, providing anytime-valid risk guarantees. Additionally, the paper proposes a framework for active labeling, which allows for selective querying of true labels within a predefined budget, enhancing the utility of RCPS by leveraging predictors to estimate expected risk based on covariates. Finally, the authors extend the setting further and develop an active learning setup with an optimal labeling policy under a fixed label budget.
Strengths: - Extended setting of a risk-controlling prediction set setting.
- Possibly rigorous in theoretical analysis.
Weaknesses: - The paper is notation-heavy, considers complex settings, and uses concepts that are not so commonly known. Unfortunately, the authors are doing nothing to help readers understand their work.
- Many symbols (and there are many) are only introduced once.
- The goal of the method is easy to miss in the body of the text.
- The experimental section is hard to understand without reading the theory sections, and there are no meaningful conclusions.
- There are no summaries, conclusions, schemas, pseudo-codes, or additional intuitions to help the reader understand the paper.
- Experiments feel very limited.
- I have doubts about the practicality of the introduced setting, as the paper needs better application examples.
Due to very limited time, I was not able to read the related works that were necessary to fully understand this paper, verify the theory, and possibly appreciate it fully. I would like to believe that all the theory presented in the paper is right, but even assuming that, I think the paper is below the acceptance threshold due to quite poor presentation.
Technical Quality: 2
Clarity: 2
Questions for Authors: - What are other examples of applications for the proposed framework?
- How, in practice, should one select $\theta$ and $\alpha$?
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: I see no potential negative social impact of this work. Discussion on limitations is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Our response to each highlighted weakness and question are as follows.
- **Notational usage and clearer introduction of concepts.** We will clean up our notational usage and introduce more unfamiliar concepts more comprehensively (e.g., risk controlling prediction sets, e-processes, anytime-validity, etc.) in our final draft.
- **Symbol usage.** We will simplify this as well to make it more accessible, including incorporating the suggestions put forth by reviewer XpAy about the notation for the problem setting.
- **Goal of method is easy to miss.** The goal of our method is specified in Definition 1 and (3) in our paper --- we will highlight this definition and make it more apparent in the final version.
- **Experimental section requires reading theory section.** We will separate out the methods we are employing from the theoretical section so they can be read in a standalone fashion (i.e., in an algorithm environment). We do have meaningful conclusions concerning our experiments --- the "pretrain" and "learned" label policy/predictor combinations that both utilize the machine learning model being calibrated outperform (in terms of label querying efficiency) the naive baseline labeling policies/predictor combos. This shows that, in practice, the models we are calibrating can be used to estimate the conditional risk and variance to a sufficient degree of accuracy such that it significantly improves the label efficiency over the baseline approaches. We will clarify this emphasis in our final version.
- **Limited experiments**: We will aim to add more experiments with different machine learning methods and risk functions to illustrate the efficacy of our procedure (i.e., a QA task for LLMs, and image labeling using the MS-COCO dataset).
- **Practicality of methods.** We will provide more concrete examples of applications, but we think our framework is quite widely applicable. Here are some examples of applications:
- *Reduce query cost in medical imaging.* A medical imaging system that outputs a score for each pixel of an image, determining whether there is a lesion or not, would want to utilize labels given by medical experts for unlabeled images from new patients. Since the cost of asking experts to label these images is quite high, one would want to query experts efficiently, and only on data that would be most helpful for reducing the number of highlighted pixels.
- *Domain adaptation for behavior prediction.* One reason we would want online calibration in a production setting is that the distribution of data may be much different from any data we have access to before deployment. For example, during a navigation task for a robot, we may want to predict the actions of other agents and avoid colliding with them when travelling between two points [1]. Since agents may behave differently in every environment, it makes sense to collect behavior data in the test environment and update the behavior prediction in an online fashion to get accurate predictions calibrated specifically for the test environment.
- *Safe outputs for large language models (LLMs).* One of the goals with large language models is to ensure their responses are not harmful in some fashion (e.g., factually wrong, toxic, etc.). One can view this as outputting a prediction set for the binary label set $Y \in \\{\texttt{harmful},\ \texttt{not harmful}\\}$. Many pipelines for modern LLMs include some form of a safety classifier, which scores the risk level of an output and determines whether it should be shown to the user [2, 3] or whether a default backup response should be used instead. One would want to label production data acquired from user interactions with the LLM and use it to calibrate the cutoff for scores that are considered low enough for the response to be allowed through.
- **Choice of $\theta$ and $\alpha$.** The choice of $\theta$ and $\alpha$ will depend on the application the user has in mind. Still, since $\alpha$ determines the probability that the bound holds, a reasonable default is $\alpha = 0.05$. We would like to reiterate that a particular utility of our method is that the probability bound holds *uniformly over every time step*: the probability that there is a single $\hat{\beta}_t$, over all $t \in \mathbb{N}$, with risk greater than $\theta$ is less than $\alpha = 0.05$. $\theta$ should be chosen based on the risk metric being controlled, and a reasonable default choice could also be $\theta = 0.05$. In the context of image classification, this would mean that the prediction set of possible classes fails to cover the true label only 5\% of the time on average. We will elucidate this point more in the final version.
**References**
[1] J. Lekeufack, A. N. Angelopoulos, A. Bajcsy, M. I. Jordan, and J. Malik. Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions. arXiv:2310.05921, 2024.
[2] T. Markov, C. Zhang, S. Agarwal, F. E. Nekoul, T. Lee, S. Adler, A. Jiang, and L. Weng. A Holistic Approach to Undesired Content Detection in the Real World. *AAAI*, 2023
[3] L. Hanu and Unitary team. Detoxify. Github. https://github.com/unitaryai/detoxify, 2020.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for your detailed response to my rather short review. Other reviews also pointed out areas for improvement in terms of the presentation, so I hope you will address them in the next revision of your paper. Since the promised improvement in the presentation cannot be verified at the moment, I keep my score as it is. However, I'm okay with your work being accepted, since the other reviews recommended it. | Summary: The setting extends the model of Bates et al.---which provides confidence-bound-type guarantees on the performance of a trained "black-box" predictor, parametrised and nested with respect to a monotonic parameter $\beta$---to the online setting. The goal of the original setting is to provide risk-controlling prediction sets (RCPS) mapping a feature $X$ to a set $\mathcal{T}(X)$ in the label space, which is judged on the basis of a predetermined safety criterion. The online extension is then well justified by the observation that the calibration data itself is often limited without deploying the model in practice, particularly if training occurs in an online fashion.
The contributions of the paper are:
1. An extension of the RCPS notion to the online setting, and a derivation of guarantees that are anytime-valid. In other words, confidence sets are refined as data is accrued, nevertheless maintaining risk control over the entire stream.
2. An extension of the RCPS which is valid under active learning, where a learner may choose whether to obtain a label based on the covariates, under a fixed total query budget.
In addition, the authors provide guarantees in the regret sense on the performance of the derived methods in terms of the log-optimality criterion common for evaluating anytime-valid methods. This decouples into the regret of an exp-concave sequence dependent only on a quantity called the betting fraction---which, due to exp-concavity of the log-optimality criterion with respect to the betting fraction, may be estimated sub-linearly in the number of rounds $T$ (this should be $O(\log(T))$)---plus two concentration-type terms dependent on the convergence of two additional parameters derived from the risk, which may be estimated from the data. The convergence of such estimates will depend on the variance of the classifier's risk and is determined on a case-by-case basis. Experiments are included for verification of the theoretical guarantees.
Conclusion:
While there are some weaknesses with regard to the communicability of the results for the intended audience, I believe on balance that the ideas in this paper will turn out to be of broad interest to the experimental community, and have clear practical relevance. From the theoretical perspective, although many of the ideas are directly generated from the theory of e-statistics, this is an encouraging example of tailoring theory to a clear practical need.
Strengths: 1. The formalism is well developed, and demonstrates a high-degree of understanding of the use of e-variables as diagnostic tools of testing processes, and the derived confidence methods are a natural extension of the theory to a very practical setting.
2. The extension to label-efficient active learning further pushes the abstract theory into the realms of practicability, and the authors provide an example of a set of estimators for the various relevant quantities in the experiments. This gives an (somewhat implicit) recipe for practitioners looking to deploy the tools developed.
3. The regret bound derived gives an interpretable decomposition of the optimality of the regret of the growth-function (used to bound the log-optimality criterion), which decouples statistical quantities from the betting fraction, which is obtainable from standard online learning methods.
4. The experiments (although arguably non-extensive) provide minimal necessary examples and concrete estimators for the relevant quantities present in the regret bounds, and demonstrate empirically the convergence of the sum total of relevant components to their optimal values.
Weaknesses: 1. A few more examples of the method being used in practice, along with the guarantees from the regret bound explicated, conditional upon the estimators idiosyncratic to the exact settings treated would be much more helpful than the abstract bound. Although for a theorist these results make sense, and keeping the general form due to the presence of the risk estimator variance and other empirical quantities does find some justification due to the variability of these quantities (which may alter the order of the regret bound), a few examples would really help illustrate the process + guarantees for a practitioner, which I guess would be the intended audience. I think this paper highlights the difficulty of the communicability of theoretical results in a digestible fashion, but does not constitute a weakness of the results themselves.
2. The experiments are not particularly extensive, and only encompass two examples; a contrived example of uniformly drawn features with Bernoulli labels (which is still a demonstrative diagnostic for the methods described, but have very well-behaved associated estimates), and a more realistic example based on the Imagenet dataset. Furthermore, as mentioned above, the comparison of online performance with respect to the regret bound would be even more helpful.
3. It would be nice to see some experiments illustrated with harder examples, for example in cases where the convergence of estimates would have a rate different from $O(\sqrt{T})$.
Technical Quality: 4
Clarity: 2
Questions for Authors: 1. To my understanding, the convergence rates of the estimation terms largely dominate the $O(\log(T))$ rate from the $G^{\beta}$-contribution. Are there any non-trivial examples where this isn't the case?
2. Is there any hope of obtaining an adaptive rate in the growth function---perhaps by means of a tailored variance-reduced estimator---which could yield an instance-dependent result, if the true risk-variance is low? The estimation of $q$ and $r$ seems like it might be particularly troublesome if one tries to derive something for a general case, but this might be a misunderstanding on my part.
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: I have discussed limitations of the work above, and it seems that the authors have laid out the limitations of the degree of practicability due to the i.i.d. assumption (i.e., if training is coupled to the observed data stream). Nevertheless, I still think this is a solid contribution. With regard to the societal impact, the development of a robust theory of confidence in machine learning is undoubtedly a good thing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Here are our point-by-point responses to each of the highlighted weaknesses.
1. **More examples of the method being used in practice, along with examples of choice of estimators for the labeling policy and control variate regression.** We agree that providing more examples would better illustrate the utility of our method, and will add more concrete examples to our final draft. In particular, we note that machine learning models often have some estimate $\hat{P}(X)$ of the conditional distribution of $Y \mid X$ (e.g., class probabilities, conditional diffusion models, LLMs, etc.). Thus, for any realized covariate $x$, we can use $\mathbb{E}_{Y \sim \hat{P}(x)}[r(X, Y, \beta) \mid X = x]$ from the machine learning model as our choice of $\widehat{r}(x)$. This expectation can either be calculated analytically (as we do in our classification examples in our experiments) or estimated via Monte Carlo (for generative models/LLMs where one can sample from the conditional distribution). In essence, we are already getting a predictor (for free, in some sense) from the very model we are calibrating.
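The Monte Carlo route described above can be sketched in a few lines. This is only an illustration: `predictive_sample`, `risk`, and `r_hat` are hypothetical placeholder names, and the Bernoulli predictive model and threshold-rule risk are stand-ins, not the paper's estimators.

```python
import numpy as np

# Hedged sketch (all names hypothetical): Monte Carlo estimate of
# r_hat(x) = E_{Y ~ P_hat(.|x)}[ r(x, Y, beta) ], using label samples
# drawn from a model's predictive distribution P_hat.
rng = np.random.default_rng(0)

def predictive_sample(x, n):
    # Stand-in for a model's conditional label distribution P_hat(. | x):
    # here a Bernoulli with a logistic link on the features.
    p = 1.0 / (1.0 + np.exp(-x.sum()))
    return rng.binomial(1, p, size=n)

def risk(x, y, beta):
    # Placeholder risk: 0/1 loss of a score-threshold rule at level beta.
    score = 1.0 / (1.0 + np.exp(-x.sum()))
    return float(int(score > beta) != y)

def r_hat(x, beta, n_samples=2000):
    # Average the risk over sampled labels to approximate the expectation.
    ys = predictive_sample(x, n_samples)
    return float(np.mean([risk(x, y, beta) for y in ys]))

estimate = r_hat(np.array([0.5, -0.2]), beta=0.5)
assert 0.0 <= estimate <= 1.0
```

For models with an analytic conditional (e.g., class probabilities), the inner average would be replaced by an exact expectation.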
2. **More extensive experiments and empirical analysis of regret bound**. We will aim to add more experiments with different machine learning methods and risk functions to illustrate the efficacy of our procedure (e.g., a QA task for LLMs, and image labeling using the COCO-MS dataset), as well as an empirical analysis of our regret bound in our experiments.
3. **Experiments with estimators of varying rates of convergence.** We agree that more experiments in easier or harder settings could illustrate how different rates of convergence affect our method. However, in practice, one would not train a model from scratch on the labeled data, but use the labeled data to fine-tune existing pretrained models. In fact, our framework is able to learn a labeling policy or predictor using outputs of the pretrained machine learning model we are calibrating, making the learning task much easier. The regret bound elucidates how the error of the label policy and predictor propagates into the growth rate. We do agree that the efficacy of these pretrained models would differ across tasks, and we will be sure to include an empirical analysis of how the accuracy of pretrained (or learned) label policies and predictors affects the accuracy of our $\hat{\beta}$ estimate.
Our responses to your questions are as follows.
1. **Does the convergence rate of the estimation terms largely dominate the $O(\log(T))$ rate from the $G^\beta$-contribution?**
In terms of asymptotic rates, we agree with your point that the $O(\log(T))$ regret incurred by online Newton step (or other online learning algorithms) will be dominated by a typical estimation rate that scales with $O(1 / \sqrt{T})$ if we are training the label policy and predictor completely from scratch. When our pretrained machine learning models are already accurate, however, we can have fast rates. An example of this is if we assume that the optimal policy/predictor lies in a class of functions that is formed by a linear combination of multiple existing pretrained models we have access to (or we are only interested in comparing against the best linear combination). In that case, the estimation error is the regret of an online linear regression algorithm, and it will have $O(\log(T) / T)$ convergence (although we would be directly using the $G^\beta$ regret result, rather than the estimation error result). So our convergence rate does depend on the assumption we make about the accuracy of our pretrained models.
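As a heuristic illustration of the dominance argument above (our reading of the discussion, not a formula from the paper): a per-round estimation error of $O(1/\sqrt{t})$ accumulates over $T$ rounds as

$$\sum_{t=1}^{T} O\!\left(\frac{1}{\sqrt{t}}\right) = O(\sqrt{T}),$$

which asymptotically dominates the $O(\log T)$ regret of the online Newton step.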
2. **Obtaining an adaptive rate.** Our result is already instance-adaptive in the sense that the optimal $G^\beta$ is adaptive to the $X$-conditional variance of the risk, i.e., it is lower bounded by a term that increases as $\mathbb{E}[\sigma_\beta(X)]$ decreases.
It is possible that when $\mathbb{E}[\sigma_\beta(X)]$ is low, the optimal predictor is estimated more quickly, and we believe exploration of adaptive convergence rates of different estimators would be an interesting problem to solve in future work. Our focus in this work is to show that the $G^\beta$ regret depends on estimator error of the policy and predictor in a direct way, and hence having pretrained models that can approximate those well will also improve the efficiency of estimating $\beta^*$.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the authors' response, and thank them for their detailed answers to my points and questions.
While it appears that we've all had some difficulty related to the presentation, I feel that my understanding of the method and its utility/scope has been greatly enhanced by this discussion, and am intrigued to see this line of work built upon from both practice and theory (it's a bias, but the adaptive rates question certainly appeals). Furthermore, I do see this work as a very nice potential bridge between theory and practice, and would like to see more work in machine learning with this kind of scope.
I would advise a more targeted rework of the presentation of results (for example, by incorporating the concrete suggestions by XpAy) such that researchers with a more practical leaning can get what they need out of it. Otherwise, I stand by my assessment of the paper, and recommend it for acceptance. | null | null | Rebuttal 1:
Rebuttal: We have made point-by-point responses to each review, and posted our rebuttals to each review below. We would also like to note the following.
1. We appreciate the suggestions and concerns the reviewers have brought up about the paper. We will incorporate their suggestions for clarification and make our introduction and setup more accessible to the reader.
2. We will also provide more concrete application examples in the text. We address this in our response to reviewer qKw8, which we replicate here.
- *Reduce query cost in medical imaging.* A medical imaging system that outputs a score for each pixel of an image, indicating whether it belongs to a lesion, would want to utilize labels given by medical experts for unlabeled images from new patients. Since the cost of asking experts to label these images is quite high, one would want to query experts efficiently, and only on data that would be most helpful for reducing the number of highlighted pixels.
- *Domain adaptation for behavior prediction.* One reason we would want online calibration in a production setting is that we may face a much different distribution of data than we had access to before deployment. For example, during a navigation task, a robot may want to predict the actions of other agents and avoid colliding with them when travelling between two points [1]. Since agents may behave differently in every environment, it makes sense to collect behavior data in the test environment and update the behavior prediction in an online fashion, obtaining predictions calibrated specifically to the test environment.
- *Safe outputs for large language models (LLMs).* One goal with large language models is to ensure their responses are not harmful in some fashion (e.g., factually wrong, toxic, etc.). One can view this as outputting a prediction set for the binary label set $Y \in \\{\texttt{harmful},\ \texttt{not harmful}\\}$. Many pipelines for modern LLMs include some form of safety classifier, which scores the risk level of an output and determines whether it should be shown to the user or replaced by a default backup response [2, 3]. One would want to label production data acquired from user interaction with the LLM and use it to calibrate the cutoff for scores considered low enough for the response to be allowed through.
3. We appreciate the reviewers' concern about experiments, and we will aim to provide more experiments in the final version (e.g., a QA factuality task for LLMs, and image labeling on the COCO-MS dataset).
4. **Comparison to active learning.** We briefly summarize the differences and similarities between active learning and our problem. Our problem objective and the methods used to prove validity differ from typical active learning methods. Active learning (stream-based and pool-based) aims to minimize label queries while still learning the best possible machine learning predictor. Guarantees in this area are usually model-based or learning-theoretic, i.e., they propose a model update/selection procedure and query procedure that minimizes the true risk of the model over a known or arbitrary function class, and derive results using a notion of class complexity (when the procedure is agnostic to the exact function class); alternatively, the methods are evaluated empirically for specific models, without guarantees. In contrast, our procedure tunes a calibration parameter that can be wrapped around any black-box model to provide a statistically rigorous risk guarantee. As a result, other types of querying strategies that are deterministic (e.g., disagreement-based, diversity-based, etc.) cannot be directly imported into our problem setting, since the statistical guarantees we derive require that our queries are probabilistic. Further, we do not think existing active learning methods necessarily tackle the same objective, since they focus primarily on optimizing the performance of a classifier rather than guaranteeing risk control while calibrating a parameter. Further development of how to leverage active learning methods in our setting is a fruitful direction for future work.
**References**
[1] J. Lekeufack, A. N. Angelopoulos, A. Bajcsy, M. I. Jordan, and J. Malik. Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions. arXiv:2310.05921, 2024.
[2] T. Markov, C. Zhang, S. Agarwal, F. E. Nekoul, T. Lee, S. Adler, A. Jiang, and L. Weng. A Holistic Approach to Undesired Content Detection in the Real World. *AAAI*, 2023
[3] L. Hanu and Unitary team. Detoxify. Github. https://github.com/unitaryai/detoxify, 2020. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ProSST: Protein Language Modeling with Quantized Structure and Disentangled Attention | Accept (poster) | Summary: In this paper the authors introduce ProSST, a method for training protein language models on both sequence and structure. They use a geometric vector perceptron trained with a structural denoising objective to obtain a structure encoder, and then perform k-means clustering on the embeddings of local residue neighborhoods in CATH to obtain a codebook of structural tokens. A masked language model is trained with disentangled attention, conditioned on the structural token inputs and the corrupted sequence.
Strengths: The approach follows a clearly promising and useful trend of incorporating explicit structural information into protein language models. ProSST improves on prior work by expanding the codebook size of structural tokenizers and showing the effects of disentangled attention for mutation effect prediction and improved pre-training perplexity. The authors ablate their design decisions to give evidence that these two changes are indeed improvements on traditional pLM design. The results on ProteinGym seem to significantly improve on pLMs without structural information, and even show noticeable improvements over SaProt.
Weaknesses: It is not clear why the structural quantization is a two-step process, involving training a separate structure encoder with a denoising objective, and then clustering to form the local codebook. An advantage of SaProt is the simplicity of the codebook and almost negligible cost of structural tokenization. How does this compare to the pre-processing needed for ProSST?
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors give some indication about the increased time and computational cost of tokenizing structures using their approach?
Why is the two-step structural quantization approach used, rather than a typical VQ-VAE as used in FoldSeek? Are there advantages to training stability, simplicity, or inference time tokenization?
Have the authors investigated using smaller neighborhood sizes than 40 nearest neighbors? Is this a route to reducing the computational burden of structural quantization?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for recognizing the novelty and contribution of our work. Your insightful comments helped us enrich the analysis a lot. With the response below, we hope to address your concerns properly.
**Weakness:**
**W1.** In fact, we do not need to train the structure encoder and the clustering model for every protein. Our structure quantization serves as a data preprocessing step. The codebook is only pre-trained on a pre-training structure encoder dataset (Appendix A.2, Dataset for training structure codebook). SaProt uses FoldSeek [1] as its structure codebook, which also needs pre-training. Since training a codebook is FoldSeek's task, it isn't explicitly detailed in SaProt's article. The primary difference between our structure quantization module and FoldSeek is that our local structure considers up to the nearest 40 residues within a 10 Å distance, whereas FoldSeek only incorporates structure information between preceding and succeeding residues.
**Q1.** Tokenizing a protein structure containing $L$ residues involves three steps:
**Step 1.** For $i = 1, 2, 3, \ldots, L$, generate a local structure for residue $r_i$, converting it into a graph $G_i$. (Left side of Figure 2c)
**Step 2.** For $i = 1, 2, 3, \ldots, L$, encode each graph $G_i$ into a continuous vector $e_i$ using the trained structure encoder.
**Step 3.** For $i = 1, 2, 3, \ldots, L$, utilize the trained clustering codebook to convert $e_i$ to a token $s_i$.
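The three steps above can be sketched end-to-end as follows. This is purely illustrative: the trained GVP encoder is replaced by a random projection and the trained k-means codebook by random centroids, so only the pipeline shape (and the nearest-centroid lookup of Step 3) is faithful to the description.

```python
import numpy as np

# Illustrative sketch of Steps 1-3 with placeholder data; the encoder and
# codebook here are random stand-ins, not the trained models.
rng = np.random.default_rng(0)
L, d, K = 120, 64, 2048                  # residues, embedding dim, codebook size

# Step 1 (stand-in): one feature vector per residue's local-structure graph.
graph_feats = rng.normal(size=(L, d))

# Step 2 (stand-in): encode each local structure into a continuous vector e_i.
W = rng.normal(size=(d, d))
embeddings = np.tanh(graph_feats @ W)

# Step 3: nearest-centroid lookup in the codebook yields the token s_i.
centroids = rng.normal(size=(K, d))
d2 = ((embeddings ** 2).sum(1)[:, None]
      - 2.0 * embeddings @ centroids.T
      + (centroids ** 2).sum(1)[None, :])  # squared distances, shape (L, K)
tokens = d2.argmin(axis=1)                 # structure token sequence s_1..s_L
assert tokens.shape == (L,) and tokens.max() < K
```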
We also evaluated the time required to convert proteins of different lengths into structure tokens, using a server equipped with an RTX 1080 GPU and an Intel Xeon E5-2690 CPU. Step 2 was executed on the GPU, while Steps 1 and 3 were executed on the CPU. The results are as follows.
| Protein Name (Uniprot_ID) | Length (Local structures) | Step 1 | Step 2 + Step 3 |
| --- | --- | --- | --- |
| CCDB_ECOLI_Adkar_2012 | 101 | 0.29s | 4.43s |
| ESTA_BACSU_Nutschel_2020 | 212 | 0.67s | 4.27s |
| PTEN_HUMAN_Matreyek_2021 | 403 | 1.06s | 4.45s |
| ENV_HV1B9_DuenasDecamp_2016 | 853 | 3.24s | 5.63s |
We believe the speed is acceptable. Nevertheless, one of our important future tasks is to use parallelization techniques to optimize the speed of subgraph segmentation (Step 1) and local structure encoding (Step 2). We are working on this development and will release it in the camera-ready version.
**Q2.** Both our structure quantization module and FoldSeek involve two steps: training a structure encoder and a clustering model. We chose the denoising autoencoder for its focus on representation learning and its simplicity, which better suit our needs. While VAE and VQ-VAE are generative models, we do not require a structure generation model. The training curve of our structure encoder is shown in **Figure R9**; both the training and validation losses evolve stably.
**Q3.** Smaller neighborhood sizes may ignore some important neighbors. Although a smaller neighborhood size would accelerate average pooling, it may exclude some informative neighboring nodes. We selected the nearest **40** residues within a distance of **10 Å**, following [4]. As shown in Figure R1, local structures with more than **40** neighbors account for only 0.00052% (5.2e-6) of cases. Thus, 40 covers most cases.
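The neighbor-selection rule described above (at most the 40 nearest residues within 10 Å) can be sketched as follows; the C-alpha coordinates here are synthetic, and `local_neighbors` is an illustrative helper, not the authors' code.

```python
import numpy as np

# Sketch of the local-structure neighbor selection: keep at most the k
# nearest residues within `radius` angstroms of residue i.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 30.0, size=(200, 3))  # fake coordinates in Å

def local_neighbors(i, coords, k=40, radius=10.0):
    dist = np.linalg.norm(coords - coords[i], axis=1)
    order = np.argsort(dist)                 # nearest first
    within = order[dist[order] <= radius]    # apply the 10 Å cutoff
    return within[:k]                        # residue i itself comes first

nbrs = local_neighbors(0, coords)
assert nbrs[0] == 0 and len(nbrs) <= 40
```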
References
[1] Clustering predicted structures at the scale of the known protein universe.
[2] Convolutions are competitive with transformers for protein sequence pretraining.
[3] https://www.uniprot.org/uniprotkb/statistics
[4] Discovery of Novel Gain-of-Function Mutations Guided by Structure-Based Deep Learning.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and will maintain my score.
---
Reply to Comment 1.1.1:
Comment: We deeply appreciate the dedication you have shown in evaluating our submission and offering your comments. Your insightful recommendations have played a crucial role in improving the clarity and rigor of our work. Regardless of the ultimate outcome, your input has enhanced the clarity and quality of our work. We extend our heartfelt thanks to you! | Summary: This paper focuses on the protein representation task. ProSST introduces a quantized method to combine the information from protein structure. Additionally, the authors also propose a disentangled attention mechanism on top of the quantized structures to learn the relationship between residue and structure token sequences. They claim that the proposed method outperforms state-of-the-art methods in several tasks under both zero-shot and supervised learning settings.
Strengths: 1. The paper is easy to follow and the presentation is clear.
2. The authors proposed an interesting approach to merge representations of two protein language models based on sequences and structures. The authors provide extensive experiments to confirm the effectiveness of the proposed methods.
3. While cross-attention between sequence and structure has been widely explored before, the proposed disentangled attention seems novel. The introduction of the relative position encoding also seems beneficial according to the ablation.
Weaknesses: 1. The novelty is somewhat limited. I do believe such work is meaningful for computational biology, but this is not the first hybrid approach to protein language models (such as ESM-GearNet [1]).
2. Regarding the experiment, I strongly suggest comparing ProSST to ESM-GearNet, which I think is a fairer comparison considering the utilization of both sequence and structure.
3. Besides comparing the number of parameters, I suggest the authors provide experiments on inference speed. In my experience, clustering is often time-consuming, especially for long sequences and multiple cluster centers (K > 2000 in this paper).
[1] A Systematic Study of Joint Representation Learning on Protein Sequences and Structures
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does ProSST deal with extremely long protein sequences?
2. How to deploy this method to sequence-only datasets like the PEER benchmark [2]? It seems ProSST requires both sequence and structure as inputs.
[2] Peer: a comprehensive and multi-task benchmark for protein sequence understanding
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: My main concern is the experiment setting mentioned in the Weaknesses part. Authors are welcome to answer my questions and I am leaning to raise my rate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful feedback on our work and provide a thorough response addressing your main concerns.
**Weakness**
**W1.** We agree that our work is not the first hybrid approach to protein language models, and we have cited works such as ESM-GearNet and LM-GVP in Section 2.1. However, what sets our work apart is our disentangled attention mechanisms and structure quantization module. We redesigned the self-attention mechanism in the Transformer to accommodate multiple sequence inputs, including protein sequence, structure sequence, and relative position matrix sequence. Meanwhile, the structure quantization module can encode local structure of residues with more comprehensive information. Evaluation results demonstrate the efficacy of our model across various protein understanding tasks, especially in zero-shot mutant fitness prediction. We contend that the overall model, local structure encoding and quantization, the disentangled attention mechanism, and the evaluation results jointly represent a valuable contribution to the field of computational biology.
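For intuition, the disentangled-attention idea can be sketched generically as summing cross-attention terms between residue (content) and structure representations before the softmax. This is a DeBERTa-style schematic only; the paper's exact parameterization, including its relative-position terms and per-term projections, is not reproduced here.

```python
import numpy as np

# Generic sketch of disentangled attention: attention logits are a sum of
# content<->content and content<->structure pairwise scores. All weights
# and hidden states are random placeholders.
rng = np.random.default_rng(0)
L, d = 6, 8
H_seq = rng.normal(size=(L, d))          # residue (content) hidden states
H_str = rng.normal(size=(L, d))          # structure-token hidden states
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def scores(Q, K):
    # Scaled dot-product scores between projected queries and keys.
    return (Q @ Wq) @ (K @ Wk).T / np.sqrt(d)

# Sum the content-to-content and content<->structure attention terms.
A = scores(H_seq, H_seq) + scores(H_seq, H_str) + scores(H_str, H_seq)
P = np.exp(A - A.max(axis=-1, keepdims=True))
P = P / P.sum(axis=-1, keepdims=True)    # row-wise softmax
assert np.allclose(P.sum(axis=-1), 1.0)
```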
**W2.** We evaluated ESM-GearNet on the fine-tuning task and compared it to ProSST. The results are as follows:
| | DeepLoc | Metal Ion Binding | Thermostability | GO-MF | GO-BP | GO-CC |
| --- | --- | --- | --- | --- | --- | --- |
| ProSST | 94.32(±0.10) | 76.37(±0.02) | 0.726(±0.04) | 0.682 (±0.003) | 0.492 (±0.004) | 0.504 (±0.002) |
| ESM-GearNet | 93.55 | 74.11 | 0.651 | 0.676 | 0.516 | 0.507 |
The scores of ESM-GearNet are derived from SaProt. Note that the slightly different scores compared to the manuscript are due to the updated random seed and dataset. We have re-evaluated ProSST with different seeds in order to compute the average performance. Additionally, the evaluation was conducted on the updated GO dataset by SaProt.
**W3.** The clustering model is trained only once. The training and inference speeds of the clustering model are as follows:
| K | Clustering Model Training Time | Clustering Model Inference Time |
| --- | --- | --- |
| 20 | 5m 22s | 5 ms |
| 128 | 9m 39s | 5 ms |
| 512 | 26m 41s | 8 ms |
| 1024 | 51m 16s | 15 ms |
| 2048 | 98m 4s | 26 ms |
| 4096 | 185m 50s | 250 ms |
We also show the inference times of ProSST (110M), SaProt (35M), and SaProt (650M) in **Figure R8**. ProSST (110M) is faster than SaProt (650M) and slower than SaProt (35M). The experiments were conducted on a server with one RTX 3090 GPU and two Intel 8468 CPUs. We will discuss inference speed in the revised paper.
**Questions**
**Q1.** ProSST utilizes relative positional encoding, which supports inference without constraints on sequence length. For pre-training, we removed proteins longer than 2048 residues for training efficiency (extremely long sequences may cause an overflow of CUDA memory). However, we do not do any truncation during fine-tuning and zero-shot mutant effect prediction. Furthermore, we have evaluated the perplexity of ProSST on a long protein dataset containing 594 proteins longer than 2048 residues. The statistics of the sequence lengths and evaluation results are as follows:
| Count | Length Mean. | Length Min. | Length Max. | Length Std. | Perplexity |
| --- | --- | --- | --- | --- | --- |
| 594 | 2313 | 2049 | 2699 | 180 | 9.013 |
The perplexity is **9.013**, similar to the validation-set perplexity of **9.033**, suggesting that ProSST can effectively understand long protein sequences. Another notable point is that extremely long protein sequences are very scarce in nature, as almost **99.7%** of proteins are shorter than 2048 residues [1]. We will add a discussion of long sequences to the manuscript.
**Q2.** The current implementation requires both sequence and structure as inputs. Extending the framework to accept sequence-only datasets is possible. Here, we provide the following approaches:
**Approach 1:** Use AlphaFold or ESMFold to predict structures.
**Approach 2:** Use ProSST(MST), which is trained with structure masking and supports sequence-only inputs. We will release ProSST(MST), an extension of the ProSST model that incorporates Masked Structure Training (MST). With MST, during pre-training each sample's structure sequence has a 50% probability of being replaced by a fully masked sequence [1,1,1,1,1,…,1], simulating a missing protein structure. When applying ProSST to sequence-only datasets, we therefore use the masked sequence [1,1,1,1,1,…,1] as a substitute for the structure token sequence.
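The MST corruption described above can be sketched as follows; `maybe_mask_structure` and `sequence_only_structure` are hypothetical helper names, and the use of token id 1 as the mask follows the description in this rebuttal.

```python
import numpy as np

# Hedged sketch of the described MST input corruption: with probability p,
# replace a sample's structure-token sequence with the all-mask sequence
# [1, 1, ..., 1].
rng = np.random.default_rng(0)
MASK_TOKEN = 1

def maybe_mask_structure(struct_tokens, p=0.5):
    if rng.random() < p:
        return np.full_like(struct_tokens, MASK_TOKEN)
    return struct_tokens

def sequence_only_structure(length):
    # For sequence-only inference, the fully masked sequence is always used.
    return np.full(length, MASK_TOKEN)

assert (sequence_only_structure(5) == MASK_TOKEN).all()
```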
We have evaluated these methods on the ProteinGym benchmark, binary localization prediction (BLP) from the PEER benchmark, and perplexity on the validation set. The results are as follows:
| Model | Structure Source | ProteinGym | BLP(Peer) | Perplexity |
| --- | --- | --- | --- | --- |
| ProSST (K=2048) | AlphaFold2 | 0.504 | 94.32 | 9.033 |
| ProSST (K=2048) | ESMFold | 0.471 | 92.73 | 9.144 |
| ProSST (MST) | Missing | 0.438 | 91.84 | 10.325 |
| ProSST (MST) | AlphaFold | 0.456 | 92.31 | 9.447 |
| ProSST (K=0) | Missing | 0.392 | 89.65 | 12.190 |
Rows 1-2 show the performance differences between AlphaFold and ESMFold. Rows 3-4 show the performance of the new model ProSST(MST). Row 5 shows the performance of the sequence-only model. A discussion on how to deploy ProSST on sequence-only datasets will be added to the revised version.
Reference
[1] https://www.uniprot.org/uniprotkb/statistics
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses, I believe my concerns have mostly been addressed, although I still think there exists many similarities with previous work. I do think that this is now above the acceptance threshold and have changed my score.
---
Reply to Comment 1.1.1:
Comment: We thank you for taking the time and effort to review our research and engage with our responses. Your comments have helped us refine our paper. Regardless of the final outcome, your feedback has helped us better define and present our research objectives. | Summary: The paper proposes ProSST (Protein Structure-Sequence Transformer), a novel PLM incorporating sequence and structure information. ProSST can be split into two parts: a modified version of the Transformer architecture and a quantization module. The quantization module consists of a structure encoder using a Geometric Vector Perceptron (GVP). The authors train the GVP on a large dataset of protein structures (18.8 million) in which they encode residues by contextualizing local structures with surrounding ones in a manner more robust than current methods. They then discretize local residues into tokens using a pre-trained k-means clustering model. ProSST adds relative positional information to the structure before feeding it into the Transformer architecture. Instead of standard self-attention, the authors propose using disentangled attention, a method that allows the model to process contextual information from sequence and structure tokens. They test their model on various downstream protein function tasks, for which they achieve state-of-the-art performance across almost all metrics.
The main contributions of the paper are as follows:
● Introduced a novel method for protein structure quantization by incorporating contextual structural information into a PLM using a structure autoencoder
● Proposed a nonstandard attention calculation that allows for sequence and structure information to be incorporated into protein function prediction
Strengths: - The paper clearly defines their contributions at the beginning of the paper and justifies most of the design choices through a proper literature review.
- The paper presents current gaps in research and proposes relevant methods to bridge this gap.
- The methods section provides sufficient design details, allowing for the model architecture to be reproduced.
- Similarly, the formulation for both disentangled attention and the training objective are well defined.
- The tasks outlined in the experiments section range in variety, covering many different facets of protein function prediction.
- Following this, the ablation study provided by the authors is robust in its analysis of architecture choices, justifying empirically the choices that are made.
- Overall, some of the claims made in the introduction are justified by the experimental results.
Weaknesses: - Despite having clearly formulated reasons for choosing their quantization methods, the authors need a similar justification for disentangled attention. There is little discussion on this design choice until after the related works section. Similarly, there is little experimental analysis of disentangled attention outside the ablation study. This is surprising as the departure from standard self-attention allows for a nearly four percent increase in performance, which is just as impactful as their quantization module. The paper talks about learning the connections between structural tokens and sequence tokens as the main contribution of ProSST but never demonstrates the connection.
- The paper conveys the generalization power of their model by testing it on multiple downstream protein function prediction tasks. ProSST outperforms nearly every other model on these benchmarks, yet some experimental setups raise concerns. Most glaringly, the paper provides little information on their testing regimen across all training tasks. In the experiments that do outline testing, the authors describe extremely small validation sets. For example, in the training phase, the authors train ProSST on 18.8 million samples, with 100,000 being used for validation (~ 99:1 split). Similarly, the split for training the structure codebook is 31,070 train points with only 200 points for validation (~ 99:1 split). The paper needs to provide more information on the testing regimen for the fine-tuning tasks. Along with omitting these training details, the authors seemingly make many mistakes in reporting their results. First and foremost, the paper provides no error or significance analysis despite including empirical evaluations of ProSST (which is required in the NeurIPS paper checklist).
- In Table 3 and Table 4 on page 8, the paper evaluates the effectiveness of ProSST, marking the best results in bold. When comparing the best results from both tables, it is clear that the ablation study was done with K=2048 being their clustering hyperparameter due to the matching accuracies (although this is not reported explicitly). However, almost all of the other metrics outside of accuracy for this experiment do not match, which is either a copying mistake by the authors or is indicative of a different experiment. If the latter explanation is true, then the variance in the results indicates that the paper should have been more thorough in its analysis of the model’s performance across initializations and data splits. Along with the extremely small validation set sizes and little testing information, this makes me skeptical of the empirical results section. In addition, in Table 1, the authors incorrectly bold their model’s performance on the NDCG metric when it is clear that five other models outperform theirs on this specific metric.
Another area for improvement of the paper is its analysis of the structure quantization module. The model has a four percent increase in performance when jumping from K=0 to K=20 and has diminishing returns from there onward. At K=0, there are no structure tokens passed to the model, meaning that disentangled attention is unable to work effectively in this experimental setting. This becomes an issue when you compare the results to the ablation study on disentangled attention; it is difficult to compare the effects of the quantization module to disentangled attention. Because of this, their results might suggest that the increase in performance is due to disentangled attention, not necessarily their quantization module.
The conclusion section is also a weakness of the paper. It does not give much insight into the methods proposed and instead reviews the methods section. It lacks a thorough analysis of the paper's results.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is there a reason why several of the baselines (e.g., EVmutation, DeepSequence, WaveNet, UniRep, RITA, ProGen2, and VESPA) and datasets (DMS data other than thermostability) from ProteinGym are not included in this study?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - Contribution is limited given that ESM3 and many other PLMs now incorporate structure into the language model
- There has been no discussion on several protein functions that do not benefit from rigid 3D structure, as predicted by AF2/3, e.g., disordered protein functions with floppy regions: how would ProSST handle those cases?
Conclusion:
This paper provides interesting methods for improving PLM performance by intelligently incorporating structural information into protein function prediction problems. The two novelties of this paper are its quantization method and its application of disentangled attention. Despite providing compelling methods and promising results, many details are glossed over, which over the course of the paper raise many concerns for the reader about the validity of the results. My suggestion to the authors is to provide more insight into their choice of disentangled attention (related works section) and analysis of the results (conclusion). Similarly, the experiment section and supplemental materials need to be updated with enough information for the reader to faithfully reproduce the results according to best practices. In general, there is enough concern about the experimental results to warrant justification and a response from the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your meticulous feedback. Below are our responses.
**Weakness:**
**W1.** We conducted additional experiments to analyze disentangled attention.
**Experiment 1**: We replaced all structure tokens in the test set with zeros or random numbers from a uniform distribution and re-evaluated ProSST. The results are:
|Structure|ProteinGym|Perplexity|
|---|---|---|
| Original|0.504|9.033|
| All-zero|0.112|14.524|
| Random|0.182|14.024|
Incorrect structure tokens decreased performance, suggesting that **disentangled attention learned the sequence-structure relationship.** Otherwise, performance would have been less affected.
**Experiment 2**: We train ProSST (K=1), where the structure tokens are replaced with a constant value of 1. This setup helps preserve the disentangled attention mechanism. If ProSST (K=1) still improves performance, it indicates that the improvement is solely due to the disentangled attention.
| |DeepLoc|ProteinGym|Perplexity|
|---|---|---|---|
|ProSST(K=2048)|94.32(±0.10)|0.504|9.033|
|ProSST(K=1)|89.48 (±0.24)|0.390|12.182|
|ProSST(K=0)|89.77 (±0.26)|0.392|12.190|
There is little difference between K=1 and K=0, as their perplexity curves (see **Figure R3**) almost overlap. This indicates that **disentangled attention cannot improve performance without correct structure tokens**.
**Experiment 3:** Visualizing the learned attentions. We show different types of attentions on Green Fluorescent Protein (GFP), including 238 residues, in **Figure R4**. We can see that **disentangled attention learns different attention patterns**, with notable differences between R2S and S2R.
**W2.**
(1) **Dataset Split**
(1.1) There is no test set for pre-training the structure codebook and the Transformer. We use only a small validation set for pre-training to maximize data usage, similar to pre-trained models like ESM-2, which uses 0.5% of its data for validation [1].
(1.2) The fine-tuning datasets have train/valid/test splits like SaProt's and are downloaded from SaProt. Data statistics will be provided in the revised paper.
(1.3) In this zero-shot mutant effect prediction, all data are in the test set.
**(2) Error and Significance Analysis**
(2.1) We will add error analysis to fine-tuning datasets. We use the same hyperparameters and repeated experiments five times with different seeds. The average performance served as the metric, with the standard deviation as the error. The results are:
| |DeepLoc|Metal Ion Binding|Thermostability|GO-MF|GO-BP| GO-CC |
|---|---|---|---|---|---|---|
|ProSST|94.32(±0.10)|76.37(±0.02)|0.726(±0.04)|0.682(±0.003)|0.492(±0.004)|0.504(±0.002)|
Note that the slightly different scores compared to the manuscript are due to the updated random seed. Additionally, the evaluation was conducted on the updated GO dataset by SaProt.
(2.2) The error in the ablation study tables will be updated in the revised version, which would not change the performance ranking.
(2.3) We used the non-parametric bootstrap method for zero-shot mutant fitness prediction to test differences between the baselines and ProSST. In all tests, the p-values were less than 0.01.
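The rebuttal does not spell out the details of the bootstrap test in (2.3). A minimal sketch of one plausible version — a one-sided paired bootstrap on per-assay score differences, with all function names and data being our own synthetic illustrations rather than the authors' actual procedure — is:

```python
import numpy as np

def bootstrap_pvalue(scores_a, scores_b, n_boot=10_000, seed=0):
    """One-sided non-parametric bootstrap test on paired per-assay scores
    (H1: mean(scores_a) > mean(scores_b)). Illustrative only."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(scores_a) - np.asarray(scores_b)
    observed = diff.mean()
    centered = diff - observed  # impose the null of zero mean difference
    idx = rng.integers(0, diff.size, size=(n_boot, diff.size))
    boot_means = centered[idx].mean(axis=1)
    return float(np.mean(boot_means >= observed))

# Synthetic per-assay Spearman scores for two hypothetical models.
rng = np.random.default_rng(1)
model_a = 0.50 + 0.05 * rng.standard_normal(200)
model_b = 0.45 + 0.05 * rng.standard_normal(200)
p_value = bootstrap_pvalue(model_a, model_b)
```

With a consistent per-assay advantage, the resampled null distribution rarely reaches the observed mean difference, yielding a small p-value.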
**W3.** The inconsistency between Table 3 and Table 4 was due to a copying error. The data for ProSST (K=2048) in Table 3 is correct, but an error occurred when copying to Table 4. The performance of ProSST in Table 4 on ProteinGym should be 0.504 for Spearman, 0.777 for NDCG, and 0.239 for Top-recall. We will correct these values and fix the bold-marking issues in all tables.
**W4.** Disentangled attention cannot improve performance without correct structure tokens. We train ProSST (K=1), where the structure tokens are replaced with a constant value of 1. This setup helps preserve the disentangled attention mechanism. If ProSST (K=1) still improves performance, it indicates that the improvement is solely due to the disentangled attention. However, ProSST(K=1) is not better than ProSST(K=0). The results can be referred to response to **W1**.
**W5.** We will emphasize the reasons for selecting discrete structure tokens and using disentangled attention, and we will highlight the analysis of the relationship between them, as discussed in W1.
**Questions:**
**Q1.** We selected baselines with high Spearman correlations from the ProteinGym webpage [2]. We also compared the mentioned baselines to ProSST and will include these results in our paper. The results are as follows:
|Model|ρs|NDCG|Top-recall|
|---|---|---|---|
|EVmutation|0.395|0.777|0.222|
|DeepSequence|0.407|0.774|0.225|
|WaveNet|0.373|0.761|0.203|
|RITA|0.372|0.751|0.193|
|UniRep|0.19|0.647|0.139|
|ProGen2|0.391|0.767|0.199|
|VESPA|0.394|0.759|0.201|
|ProSST|0.504|0.777|0.239|
Since structure models often excel on the Stability subset, we additionally compared ProSST with other models on this subset. We will include the per-category ProteinGym results (Stability, Activity, Binding, Expression) in the Appendix. ProSST achieves state-of-the-art (SOTA) performance in the Stability, Binding, and Expression subsets.
**Limitations:**
**L1.** We believe our model is a valuable addition to structure-aware protein language models. Note that ESM-3 was introduced in June 2024, which was after our submission to NeurIPS in May 2024.
**L2.** We utilize AlphaFold to predict structures of disordered proteins. We provide the relationship between AlphaFold pLDDT and the performance of structure models like ProSST, SaProt, and ESM-IF on ProteinGym, as shown in **Figure R5-R7**. There is a positive correlation between pLDDT and model performance: for ProSST, $\rho=0.30$; for SaProt, $\rho=0.31$; and for ESM-IF1, $\rho=0.42$, where $\rho$ is the Pearson correlation coefficient. This indicates that structure models may not perform well on disordered proteins.
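The pLDDT-vs-performance analysis above reduces to a Pearson correlation over per-assay pairs. A minimal illustration with synthetic numbers (the variable names, values, and linear relationship are ours, not the authors'):

```python
import numpy as np

# Synthetic per-assay pairs: AlphaFold pLDDT (structure confidence)
# vs. a model's Spearman score on that assay. Values are made up.
rng = np.random.default_rng(0)
plddt = rng.uniform(50, 95, size=150)
spearman_score = 0.004 * plddt + 0.05 * rng.standard_normal(150)

# Pearson correlation between structure confidence and performance.
rho = float(np.corrcoef(plddt, spearman_score)[0, 1])
```

A positive `rho` here would mirror the reported trend: the better the predicted structure, the better the structure-aware model performs.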
References
[1] Evolutionary-scale prediction of atomic-level protein structure with a language model.
[2] https://proteingym.org/benchmarks
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: My concerns are mostly addressed. I increase my score.
---
Reply to Comment 1.1.1:
Comment: We are truly grateful for the time and effort you dedicated to reviewing our work and considering our responses. Your constructive feedback and expert advice have significantly enhanced the quality of our research. Regardless of the final decision, your thoughtful engagement has greatly clarified the objectives and structure of our paper. Thank you for your invaluable contribution! | Summary: This paper presents ProSST, a new language model for protein data that captures both structural and sequential modalities of proteins. The protein's structure is tokenized using a graph-based auto-encoder architecture, where each residue's local structure is embedded into a vector in a high-dimensional embedding space and then classified using a $k$-means clustering algorithm within that embedding space. The two sequence-based and structure-based data streams are then passed to multiple disentangled attention layers, allowing the model to better capture the connections between structure and sequence data. Numerical results showcase the outperformance of ProSST over several baselines across a range of downstream tasks.
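The quantization step described in the summary — continuous local-structure embeddings mapped to discrete tokens via $k$-means in embedding space — can be sketched as follows. This is a toy NumPy version; the deterministic farthest-point initialization, embedding dimension, and tiny $k$ are our assumptions (the paper's codebook uses K up to 2048):

```python
import numpy as np

def quantize(X, centroids):
    """Map continuous local-structure embeddings to discrete structure
    tokens by nearest-centroid assignment."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

def kmeans_fit(X, k, iters=50):
    """Plain Lloyd's k-means with deterministic farthest-point
    initialization (illustrative only)."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = quantize(X, centroids)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

# Toy data: three well-separated clusters of 8-d "residue embeddings".
rng = np.random.default_rng(1)
X = np.concatenate([m + 0.05 * rng.standard_normal((40, 8))
                    for m in (0.0, 1.0, 2.0)])
centroids = kmeans_fit(X, k=3)
tokens = quantize(X, centroids)  # one discrete token per residue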
Strengths: - Fusing protein structure and sequence data is an important and timely research topic, and the proposed architecture provides a novel way of combining these two modalities to derive more expressive protein representations.
- The numerical results and ablation studies are quite comprehensive and show considerable gains over state-of-the-art benchmarks.
- The paper is very well written and easy to follow for the most part.
Weaknesses: The main weakness of this work, in my opinion, is that both structural and sequential data are required for the model to derive the final protein representations, since the amount of available structural data is lower than that of sequence data. I wonder if there is any way in which the model can operate on sequence input alone, with the structure knowledge embedded into the model parameters. Especially, how does the model work on protein sequences for which no structural data is available? Is there a "sequence-only" mode that the model can revert to (e.g., by feeding a "default/noise" structure input alongside the actual sequence)? I see the ProSST (-structure) model in Table 1, but that seems to be a model that was only trained on sequence data, so the training pipeline and its parameterization are completely different than the complete ProSST framework.
Technical Quality: 3
Clarity: 4
Questions for Authors: - What is the reasoning behind choosing **40** residues as the neighborhood of each node in Section 3.1?
- Is $L$ on line 128 the same as $l$ on line 137?
- Could you please elaborate on how the GVP encoder is parameterized?
- Have you studied what would happen if, instead of discrete structure tokens on the bottom-right of Figure 1, the centroid embedding of each cluster is used as the input structural vector? Also, what if the continuous local structure embeddings are directly used and the clustering is removed altogether?
- Could you explain the average pooling that happens on line 146? In particular, what does $l$ precisely represent here? Is it the 40-node neighborhood of each residue's local graph?
- What is the difference between the disentangled attention in Eq. (1) and a regular attention mechanism, where the tokenized representations of each residue are simply concatenated together?
- What is $k$ in Eq. (2)? Is it the same as the number of clusters in $k$-means clustering of structural embeddings?
- Why is the structure $s$ in Eq. (6) left unmasked? Shouldn't you also mask $s$ when you make the residue tokens in the sequence data? On a related note, shouldn't the structure of the variant sequence be changed in the first term of Eq. (8) when calculating the scores?
- It would be helpful if you could also compare your approach with ESM-GearNet, and also provide the results of supervised downstream prediction with frozen embeddings in Section 4.2 (as opposed to fine-tuning the model).
- Minor points: $e$ should be replaced by $e_i$ on line 154, and $m$ should be replaced by $M$ in Eq. (6).
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: As the authors allude to, the main limitation of the model is the assumption of the availability of structural data and the computational complexity of the structural quantization process.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful feedback on our work. We would like to address your questions and concerns as follows:
**Weaknesses**
**W1.** Feeding a "default/noise" structure input alongside the actual sequence was not included in the current version, but we have a solution for it, and this solution is not very sophisticated. We trained ProSST(MST), where MST stands for Masked Structure Training. During pre-training, each sample's structure sequence has a 50% probability of being replaced by a fully masked sequence [1,1,…,1]. This approach simulates the scenario of missing protein structures. Although ProSST(MST) outperformed ProSST(K=0) in sequence-only mode, it was inferior to both the original ProSST model with structure inputs and ProSST(MST) with structure inputs.
We offer two approaches for sequence-only proteins:
- Approach 1: Use AlphaFold or ESMFold to predict structures.
- Approach 2: Use the ProSST(MST).
We have evaluated these methods on the ProteinGym benchmark and perplexity on the validation set. The results are as follows:
|Model|Structure|ProteinGym|Perplexity|
|---|---|---|---|
|ProSST (K=2048)|AlphaFold2|0.504|9.033|
|ProSST (K=2048)|ESMFold|0.471|9.144|
|ProSST (MST)|Missing|0.438|10.325|
|ProSST (MST)|AlphaFold|0.456|9.447|
|ProSST (K=0)|Missing|0.392|12.190|
Rows 1-2 show the performance of Approach 1. Rows 3-4 show the performance of the new model ProSST(MST). When using AlphaFold, ProSST (MST) is inferior to ProSST (K=2048). We will add a discussion on how to apply ProSST to the revised paper.
**Questions:**
**Q1.** The reason is that **40** covers most cases. We selected the nearest **40** residues within a distance of **10 Å**, according to [1]. As shown in **Figure R1**, local structures with more than **40** neighbors account for only 0.00052% (5.2 x 10^-6), indicating that our choice covers most cases.
**Q2.** They are not the same. $L$ is the number of residues in a protein, also referred to as the length of the protein, while $l$ is the number of nodes in a graph. A protein contains $L$ residues, each corresponding to a local structure, which in turn corresponds to a graph $G$. For any arbitrary graph $G$, $l$ is the number of nodes in it.
**Q3.** The GVP encoder includes a six-layer message-passing graph neural network in which a geometric perceptron replaces the MLP to ensure translational and rotational invariance of the input structure. Our GVP encoder is consistent with the original GVP-GNN [2], except that we removed the residue type information. The GVP encoder is trained from scratch. We will provide detailed descriptions and parameterizations of the GVP in the Appendix.
**Q4.** Both choices are worse than discrete structure tokens. (1) Using the centroid embedding (K=2048) as input yields inferior results:
|Structure Inputs|ProteinGym|DeepLoc|Perplexity|
|---|---|---|---|
|Centroid Embeddings|0.462|91.73(±0.24)|9.932|
|Structure Tokens|0.504|94.32(±0.10)|9.033|
We believe it is because the Transformer requires a learned structure token embedding rather than a fixed embedding.
(2) Directly using continuous local structure embeddings as structure inputs leads to overfitting, as shown in **Figure R2**. Similar results have also been observed in SaProt.
**Q5.** $l$ represents the total number of nodes in a graph $G$. Average pooling refers to averaging the embeddings of all nodes within the graph. $l$ is not always equal to 40. Because when there are not more than 40 residues within a 10 Å distance, $l$ will be less than 40.
**Q6.** The disentangled attention contains multiple regular attention mechanisms, allowing the model to learn different attention patterns for better contextual representation. The computation of the disentangled attention is also less than that of directly concatenated self-attention because the complexity of the attention mechanism increases quadratically with the sequence length. Although disentangled attention involves additional attention calculations, it does not increase the sequence length.
**Q7.** The $k$ in Eq.(2) should actually be $L_{max}$, which is the cutoff of relative position. We will correct it in the paper.
**Q8.** (1) Because our model aims to learn the contextual representation of residues rather than structure tokens, structure $s$ in Eq.(6) is left unmasked. (2) When calculating scores of variants, we utilize the structure of wild sequences because mutants only slightly alter the structure of proteins [3], and the structure of mutants is difficult to predict[4]. Other structure models, including ESM-IF, SaProt, ProteinMPNN, etc., also use the wild-type structure when scoring variant sequences.
**Q9.** We have compared ProSST to ESM-GearNet and ProSST(fixed parameters) in fine-tuning downstream tasks. The results are as follows:
| | | DeepLoc | Metal Ion Binding | Thermostability | GO-MF | GO-BP | GO-CC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ESM-GearNet | 690M | 93.55 | 74.11 | 0.651 | 0.676 | 0.516 | 0.507 |
| ProSST(fixed) | 110M | 92.36 (±0.24) | 74.27 (±0.15) | 0.697 (±0.06) | 0.651 (±0.013) | 0.479 (±0.013) | 0.482 (±0.009) |
| ProSST | 110M | 94.32(±0.10) | 76.37(±0.02) | 0.726(±0.04) | 0.682 (±0.003) | 0.492 (±0.004) | 0.504 (±0.002) |
The scores of ESM-GearNet are derived from SaProt. Note that the slightly different scores compared to the manuscript are due to the updated random seed and dataset. We have re-evaluated ProSST with different seeds to compute the average performance. Additionally, the evaluation was conducted on the updated GO dataset by SaProt.
**Q10**. We will correct them. Thank you.
References
[1] Discovery of Novel Gain-of-Function Mutations Guided by Structure-Based Deep Learning.
[2] Learning from Protein Structure with Geometric Vector Perceptrons.
[3] A folding space odyssey.
[4] Can AlphaFold2 predict the impact of missense mutations on structure?
---
Rebuttal Comment 1.1:
Comment: Thank you. I have decided to increase my score in favor of acceptance after reading the rebuttal and the rest of the reviews. I believe the paper presents a worthwhile contribution that is of interest to the NeurIPS audience.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the time and effort you dedicated to reviewing our paper and reading our responses. Your valuable suggestions and insightful comments have significantly contributed to refining the quality of our work. No matter what the final result will be, the thoughtful communication with you has better clarified our paper's goals and logic. We would like to express our great gratitude to you! | Rebuttal 1:
Rebuttal: # Rebuttal
We thank all the reviewers for their detailed comments and insightful suggestions. We have incorporated additional experiments and analyses based on the recommendations.
Here, we present a concise overview of the major enhancements that have been universally implemented, focusing on:
- We provide approaches for deploying ProSST on sequence-only proteins, including using AlphaFold or ESMFold to predict structures, or using the new ProSST (MST) model trained on randomly masked structure sequences that support sequence-only input.
- We conducted additional analysis of disentangled attention: (1) The significant decline in ProSST's performance on incorrect and random structure tokens indicates that disentangled attention has learned to leverage structure tokens rather than ignore them. (2) Evidence is provided using ProSST (K=1) to show that the model's improvement is not solely due to disentangled attention. (3) The different types of attentions within disentangled attention are visualized using green fluorescent protein (GFP).
- We performed additional comparisons with more baselines, including adding ProSST (fixed) and ESM-GearNet to supervised fine-tuning downstream tasks, and EVmutation, DeepSequence, WaveNet, RITA, UniRep, ProGen2, and VESPA to zero-shot mutant effect prediction.
- We conducted additional speed analyses, including the training time of the clustering model, the time for structure tokenization, and the inference time of the Transformer model.
- Errors and spelling mistakes have been corrected.
- We reported experimental error for fine-tuning tasks with different seeds, as well as significance tests for zero-shot mutant effect prediction.
The figures are included in the attached PDF, with corresponding references provided in each of our responses. We believe that a brief overview of these additional results will provide clear context for understanding the significance of the updates we have made.
**Figure R1.** The distribution of the number of residues within 10 Å distance.
This figure explains why we choose the nearest 40 residues as the neighborhood of each node. We only build edges for two residues when their distance is less than 10 Å. According to this figure, for an arbitrary residue, the number of residues within 10 Å of it almost never exceeds 40. Therefore, we chose 40 as the threshold, which covers almost all cases.
**Figure R2.** Perplexity curves of pre-training on continuous local structure embeddings.
This figure explains why we do not use continuous local structure embeddings as input: they can cause overfitting.
**Figure R3.** Perplexity curves of ProSST (K=1) and ProSST (K=0).
We trained ProSST (K=1), where the structure tokens are replaced with a constant value of 1. This setup helps preserve the disentangled attention mechanism. Although ProSST (K=1) employs disentangled attention and ProSST (K=0) does not, their training curves show almost no difference. This result indicates that disentangled attention alone cannot enhance the model's performance without correct structure tokens.
**Figure R4.** Different types of attentions on Green Fluorescent Protein (GFP). These attentions are the average of each head in the final layer of the Transformer.
We visualize the attention learned by ProSST on GFP to investigate whether disentangled attention can learn different attention patterns. A significant difference between R2R and S2R can be observed.
**Figure 5.** pLDDT vs. Spearman of ProSST on ProteinGYM.
**Figure 6.** pLDDT vs. Spearman of SaProt on ProteinGYM.
**Figure 7.** pLDDT vs. Spearman of ESM-IF1 on ProteinGYM.
These figures show the relationship between the Spearman's performance in mutant effect prediction and the pLDDT predicted by AlphaFold. There is a positive correlation between pLDDT and model performance.
**Figure 8.** Inference speed of different sequence lengths. (Batch size=16)
We tested the inference speed of ProSST on proteins of different lengths using a batch size of 16 on a server equipped with two Intel 8468C processors and a 3090 GPU.
**Figure 9.** Training and validation curves of the structure encoder.
We present the training and validation loss during the training process of the structure encoder, showing that both losses are stable.
Pdf: /pdf/fd4a908d0c138ff9711270b885a4dbf024c803bc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Taming Heavy-Tailed Losses in Adversarial Bandits and the Best-of-Both-Worlds Setting | Accept (poster) | Summary: This paper studies multi-armed bandits (MAB) with heavy-tailed losses.
In the heavy-tailed bandits literature, two common assumptions are made about the losses: either a known upper bound $u$ on (1+$v$)-th raw moments (where both/either $v$ and/or $u$ are known) or truncated non-negativity (or non-positivity) assumption.
This paper considers the former and proposes a Best-of-Both-Worlds (BOBW) policy based on the Online Mirror Descent (OMD) and detect-switch framework.
The authors present high-probability near-optimal regret bounds that can provide meaningful expected regret analysis.
The main techniques used in their proofs rely on variants of Freedman's inequality (Lemma 11 and 12).
- - -
After rebuttal:
The proof for the modified algorithm seems correct, though it introduces an additional logarithmic factor.
While I believe a better result could potentially be achieved using the general FTRL framework in BOBW research instead of the switch-detect framework, further improving the current analysis could be explored as a separate research direction.
As a result, I am changing my score from 3 to 6 (and the soundness rating from 1 to 3).
Strengths: ### Quality and Clarity
The main manuscript is clearly written and easy to follow.
### Originality and Significance
This paper provides high-probability regret bounds for both stochastic and adversarial settings.
The proposed policy is novel as it utilizes both OMD and a switch-detect policy specifically for (stochastic) heavy-tailed bandits.
Additionally, the results imply the achievability of BOBW performance in a pure local differential privacy setting.
Weaknesses: ### Assumption on losses
Although the authors claim that their assumption, which requires knowledge of both $v$ and $u$, relaxes the undesirable truncated non-negative losses considered in Huang et al. (2022) or non-positivity assumption in Genalti et al. (2024), I disagree with this claim.
Genalti et al. (2024) showed that there is **no** $u,v$-adaptive algorithm that can achieve the lower bound for known $u,v$ setting.
I believe that these assumptions cannot be directly compared to determine which is generally weaker **in practice**.
However, the truncated assumption appears to be more challenging problem in this context.
### Proof of Lemma 2 - step 2 (p. 17)
In the equation between (25) and (26), the authors use $\bar{w} = \rho /2$, which is **incorrect** since $\rho = 2/w$ not $2/\bar{w}$ as shown in Algorithm 1.
Therefore, the results in (26) and (27) are not necessarily true.
This implies that the authors should modify the choice of $\eta$, which was designed to cancel terms related to the results in (27).
This issue seems very critical and needs to be addressed, even if it might not change the order of regret.
### Proof for adversarial setting (Alg. 2)
By definition, for $s \leq t\_{sw}-1$, none of the tests (6), (7), and (8) should be satisfied.
This implies that for all $i\in [K]$, (6) should not be satisfied, i.e., for all $i \in [K]$,
$$
\begin{equation*}
| {\frac{\hat{L}\_{s,i}}{s} - \hat{\mu}\_{s,i}} | > 9u(\cdot)^{v/(1+v)} + 1[i \in A\_s] \text{Width}(s)/s + 1[i \not\in A\_s] \text{Width}(\tau\_i)/\tau\_i.
\end{equation*}
$$
However, equation (69) used exactly the opposite condition.
Therefore, the arguments related to bounding Part A seem incorrect.
Also, what is $\hat{L}\_{t\_{sw}-1}'$?
Is it different from the cumulative importance-weighted estimate?
### Stochastic results
From (64), the result is of order $K\log K \log T (\log(T/\xi))^3$, not $K\log K \log T (\log(T/\xi))^2$ stated in Theorem 2.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is $\hat{y}\_t$ in (2)?
1. What is $t\_{quit}$ in (13)? (just $t\_{sw}$?)
2. I cannot understand why (55) holds with $\tau\$ terms inside when $t > \tau$. To be specific $(\tau\_i / K)^{(1-v)/(1+v)}$ term.
3. Why (59) holds for all $i \in [K]$ instead of $i \in A\_t$? The last step used the test (5), where only $i\in A\_t$ satisfies $\leq$.
4. Can you explain more on (63)?
5. Although Algorithm 2 just used fixed constant, $c\_1 = 6$, why it should be defined as a variable? It seems not necessary.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I follow the whole proofs up to Appendix C.
### Minor comments
1. Step 1 in p. 16: in Algorithm 1, $\bar{w}_1$ is not defined since some necessary terms like $\eta_0$ and $\hat{\ell}_0$ are not defined. Therefore, you should define $\bar{w}_1 = w_1$ first.
Also, the last inequality of step 1 holds only when $K \leq T$. This is a very minor one, but it should be specified in somewhere in Lemma.
2. Line 629: we first some -> we first fix some?
3. Line 631: why $w\_{s,i}$ is non-decreasing? It would be true only if $i$ is activated.
4. unfinished log term appears in (48).
5. It would be better to specify that Lemma 8 is applied to derive the second inequality in (50).
6. It would be better to specify that the result of (62) is applied to derive the third inequality in (64).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first would like to thank Reviewer q85G for carefully reading our work and providing constructive comments, which significantly help us improve our work. We are also glad to further engage in interactive discussions with the reviewer.
> Cmt 1: Why Eq. (55) holds
Re: This is indeed a mis-calculation. After correction, Eq. (55) should be
$$\\left| \\widehat{L}\_{t,i} - \\sum_{s=1}^t \\mu\_{s,i} \\right| \\leq \\text{Width}(t)\\cdot t/ \\tau\_i ,$$ where basically the original $\text{Width}(\tau_i)$ is now replaced with $ \text{Width}(t)$.
Eq. (51) now implies that
$$ \\left|\\hat{L}\_{t,i} - \\sum\_{s=1}^t\mu\_{s,i}\\right| \\leq \\mathbb{I}\\{i\\in A\_t\\} \text{Width}(t) +\mathbb{I}\\{i\notin A_t\\} \text{Width}(t)\frac{t}{\tau_i}. $$
In other words, before the deactivation ($t\leq \tau_i$), the tightness of our control on $\\left|\\widehat{L}\_{t,i} - \sum_{s=1}^t \mu_{s,i}\\right|$ is unchanged, and after that, it becomes looser (with $\\text{Width}(\\tau\_i)$ replaced by $\text{Width}(t)$).
While this correction leads to changes in tests (6)-(8), we justify that we will still get the claimed regret guarantee (albeit an extra log factor in the adversarial regime). The **key intuition** is that, to fix this gap, we are replacing $\text{Width}(\tau_i)$ with $\text{Width}(t)$ in some steps, which is the same thing we did in the last step of Eq. (75) in the current proof. In other words, these $\text{Width}(\tau_i)$ in the analysis eventually becomes $\text{Width}(t_{\text{sw}})$, sooner or later.
We provide the complete roadmap below.
**A. New tests (6)-(8)**
The corrected tests (6)-(8) should be like:
$$\\text{test (6):} \\exists i\\in[K] \\text{\~such that\~}\\left| \\frac{\\widehat{L}\_{t,i}}{t} - \\widehat{\\mu}\_{t,i} \\right|> 9u\\left(\\log(\\beta/\\zeta)/ N\_{t,i} \\right)^{\\frac{v}{1+v}} + \\mathbb{I}\\{i\\in A\_t\\}\\frac{\\text{Width}(t)}{t} + \\mathbb{I}\\{i\\notin A\_t\\}\\frac{\\text{Width}(t)}{\\tau\_i},$$
$$\\text{test (7):} \\exists i\\notin A\_t \text{\~such that\~} (\\hat{L}\_{t,i} - \\min\_{j\\in A_t} \\hat{L}\_{t,j})/t > (c_1 + 4)\\text{Width}(t)/(\\tau_i -1),$$
$$\\text{test (8):} \\exists i\\notin A\_t \\text{\~such that\~} (\\hat{L}\_{t,i} - \\min\_{j\\in A_t} \hat{L}_{t,j})/t \leq (c_1 - 4)\text{Width}(t)/\tau_i,$$
**B. Stochastic regime**
After correcting Eq. (55) and tests (6)-(8), the bound in stochastic regime is unchanged, following the three steps in Sec. 4.2.1 (lines 274-280). That is, we are still able to show that (modified) tests (6)-(8) never fail. Then, steps 2 and 3 are not affected (step 2 is only about the round up to $\tau_i$, not after, and step 3 is only about the sampling strategy), so we get the same bound.
**C. Adversarial regime**
The key is whether the modified tests will impact the analysis and the guarantee in the adversarial regime, which we will show below is not the case.
**C.1. Regret decomposition**
To get the regret decomposition in Eq. (13), it is crucial to show that Eq. (66) is still true. With the modified tests, we can still show that (again with $\text{Width}(\tau_i-1)$ replaced with $\text{Width}(t_{\text{sw}}-1)$):
$$\sum_{s=1}^{t_{\text{sw}}-1} \mu_{s,i} - \sum_{s=1}^{t_{\text{sw}}-1} \mu_{s,I^*_{t_{\text{sw}}-1}} > (c_1-6) \frac{t_{\text{sw}}-1}{\tau_i-1} \text{Width}(t_{\text{sw}}-1) \geq 0.$$
Therefore, the regret decomposition in Eq. (13) still holds. The next step is to bound parts A, B, and C therein.
**C.2. Parts A and B**
With the modified tests, now we have
$\text{part A} = O\left(\left(\frac{\log(\beta/\zeta)}{N_{t_{\text{sw}}-1,i}}\right)^{\frac{v}{1+v}} + \frac{\text{Width}(t_{\text{sw}}-1)}{\tau_i-1} \right),$
and
$\text{part B} = O\left(\text{Width}(t_{\text{sw}}-1)/(\tau_i -1) \right).$
Both of them have the original $\text{Width}(\tau_i-1)$ replaced with $\text{Width}(t_{\text{sw}}-1)$.
**C.3. Part C**
Since part C is related to action $i^*_{t_{\text{sw}}-1}$ only, which is active in round $t_{\text{sw}}-1$ as shown in C.1 above, it is not affected, and we still have
$$\text{part C} = O\left(\text{Width}(t_{\text{sw}}-1)\right)$$
**C.4. Final calculation**
Combining the three terms, Eq. (75) now becomes
$$\sum_{s=1}^{t_{\text{sw}}-1}\mu_{s,a_t} - \sum_{s=1}^{t_{\text{sw}}-1}\mu_{s,i^*} = O\left(\sum_{i=1}^K N_{t_{\text{sw}}-1,i} \cdot u\left( \frac{\log(\beta/\zeta)}{N_{t_{\text{sw}}-1,i}} \right)^{\frac{v}{1+v}} \right) + O\left( \underbrace{\sum_{i=1}^K N_{t_{\text{sw}}-1,i}\frac{\text{Width}(t_{\text{sw}}-1)}{\tau_i -1}}_{\text{* term}} \right) + O\left( \text{Width}(t_{\text{sw}}-1) \right).$$
The first and third terms are the same as the original ones (and can be well bounded); in the second term (the * term), the previous $\text{Width}(\tau_i-1)$ becomes $\text{Width}(t_{\text{sw}}-1)$, which is our focus below.
Applying Lemma 6 to $N_{t_{\text{sw}}-1,i}$ and expanding $\text{Width}(\cdot)$, the * term is bounded by
$$O\left( \sum_{i=1}^K\left( q_i \tau_i(1+\log T)\right)\frac{u K^{\frac{v}{1+v}} (t_{\text{sw}}-1)^{\frac{1}{1+v}} \left(\log(\beta/\zeta) \right)^{\frac{3v}{3v+1}}}{\tau_i -1} + \sum_{i=1}^K\log(\beta/\zeta)\frac{u K^{\frac{v}{1+v}} (t_{\text{sw}}-1)^{\frac{1}{1+v}} \left(\log(\beta/\zeta)\right)^{\frac{3v}{3v+1}}}{\tau_i -1}\right).$$
The first part is not new and is bounded using $\sum_{i=1}^K q_i = O(\log K)$, and the second part (which used to be a lower-order term) is also under control since $\tau_i \geq K+1$ due to the initialization in Algorithm 2.
Finally, now the bound becomes
$$\sum_{s=1}^{t_{\text{sw}}-1} \mu_{s,a_t} - \sum_{s=1}^{t_{\text{sw}}-1} \mu_{s,i^*} = O\left( (\log K)(\log T) u K^{\frac{v}{1+v}} (t_{\text{sw}}-1)^{\frac{1}{1+v}} \left(\log(\beta/\zeta)\right)^{1 + \frac{3v}{3v+1}} \right).$$
After the correction, the bound has an extra $\log(\beta/\zeta)$ factor, which comes from the second part of the * term.
---
Rebuttal 2:
Title: Response to Reviewer q85G (2/3)
Comment: > Cmt 2: Comparison with Huang et al. (2022) and Genalti et al. (2024)
Re: We agree that these two assumptions (i.e., 1) truncated non-negative losses and 2) the knowledge of $u,v$) are in general incomparable, so one cannot say which is weaker. However, we would like to clarify that the setup we study in this work is to achieve the **BOBW** regret guarantee with the knowledge of $u,v$. Under this setup, [1] achieved the optimal BOBW guarantee with the additional truncated non-negative loss assumption, and the goal in our work is to still achieve the BOBW guarantee while removing this assumption.
While there are $(u,v)$-adaptive algorithms proposed in [1, 2], they handle **one single regime only** (the adversarial regime in [1] and the stochastic regime in [2]) and even require the truncated non-negative loss assumption. In other words, there is no result achieving a $(u,v)$-adaptive BOBW guarantee. Therefore, these adaptive results are not comparable to ours.
> Cmt 3: Proof of Lemma 2 - step 2 (p. 17)
Re: Thanks for catching this. This is indeed a gap in our analysis, as $\bar{w}_t$, obtained from the OMD update, is not controlled by $\rho_t$.
We propose a fix, which changes the update rule in Line 7 of our Algorithm 1. That is, instead of first performing the OMD update over the entire probability simplex to get $\bar{w}_t$ and then getting $w_t$, we directly perform the OMD update over the **truncated** simplex $\Omega'$ as in the original paper [3], that is
$$w_{t+1} = \text{argmin}_{x\in \Omega'} \left( \langle x, \hat{\ell}_t \rangle + D_{\phi_t}(x, w_t)\right),$$
where $\Omega' := \{x\in \Omega: x(i) \geq \lambda/K, \forall i\in [K]\}$.
By doing this, the $\bar{w}_t$ in the analysis becomes $w_t$ (which is controlled by $\rho_t$), and the entire proof becomes the same as that of [3]: there is no longer a mismatch between $w_t$ and $\bar{w}_t$, and the additional regret due to truncating the simplex in the update can be bounded as in our Lemma 3 (line 588), given that $\lambda$ is small enough. One potential disadvantage is that it is unclear to us how to perform the efficient implementation (mentioned in line 219) when the OMD update is over the **truncated** simplex $\Omega'$ (which was also our motivation to modify the original update rule in [3]).
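To make the proposed fix concrete, here is a minimal illustrative sketch of an OMD update over the truncated simplex. Caveats: the paper's regularizer $\phi_t$ is a log-barrier-type potential, while this sketch substitutes negative entropy (whose truncated-simplex update reduces to a clipped renormalization), and the function names and step size below are ours, not the paper's.

```python
import math

def omd_step_truncated(w_t, loss_est, eta, lam):
    """One entropic OMD step over the truncated simplex
    Omega' = {x : sum(x) = 1, x_i >= lam/K}."""
    K = len(w_t)
    alpha = lam / K
    # Unconstrained mirror step: multiplicative-weights update.
    w_tilde = [w * math.exp(-eta * l) for w, l in zip(w_t, loss_est)]
    # The KL projection onto Omega' has the form x_i = max(alpha, c * w_tilde_i);
    # find c > 0 by bisection so that the entries sum to one.
    lo, hi = 0.0, 2.0 / min(w_tilde)
    for _ in range(200):
        c = 0.5 * (lo + hi)
        if sum(max(alpha, c * w) for w in w_tilde) < 1.0:
            lo = c
        else:
            hi = c
    return [max(alpha, c * w) for w in w_tilde]

# A heavily losing arm is pushed down, but never below lam/K = 0.05:
w = omd_step_truncated([0.25] * 4, [5.0, 0.0, 0.0, 0.0], eta=1.0, lam=0.2)
```

The per-coordinate floor $\lambda/K$ is exactly the extra constraint discussed later in this thread; since both the objective and the domain are convex, the general update (with the actual regularizer) remains a convex program solvable by standard solvers.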
> Cmt 4: Proof for adversarial setting (Alg. 2)
Re: Thanks for catching this typo. The $\leq$ in Eq. (6) (Line 12 in Algorithm 2) should be $>$.
> Cmt 5: What is $\widehat{L}_{t_{\text{sw}}-1}'$?
Re: It should be $\widehat{L}_{t_{\text{sw}}-1}$, the cumulative IW estimate.
> Cmt 6: The bounds do not match between Eq. (64) and Theorem 2.
Re: Thanks for catching this. The correct bound should follow Eq. (64) from the proof, which has an additional logarithmic factor.
> Cmt 7: $\hat{y}_t$ in (2)
Re: It should be $\hat{\ell}_t$, the loss estimate sequence.
> Cmt 8: $t_{\text{quit}}$ in (13)
Re: $t_{\text{quit}}$ was meant to be $t_{\text{sw}}$.
> Cmt 9: Why Eq. (59) holds for all actions in $[K]$
Re: According to lines 657-658, for any action $i$ deactivated at round $\tau_i$, test (5) must fail in round $\tau_i-1$ (and is satisfied in round $\tau_i$). To derive Eq. (59), we are looking at round $\tau_i-1$ for each action $i$, not round $t$. We will improve the writing of this part.
> Cmt 10: $c_1=6$ seems unnecessary
Re: Our intention was to separate free constants (namely, $c_1\geq 6$, which can be freely chosen) from the others (some "2"s coming from the proof). As a revision to Alg. 2, we will write $c_1 \geq 6$ in the **"Input:"** rather than $c_1 = 6$ in the **"Define:"**.
> Cmt 11: Line 629: first some -> first fix some?
Re: Yes, thanks for spotting this typo.
> Cmt 12: Line 631: why is $w_{s,i}$ non-decreasing?
Re: Correct. $w_{s,i}$ is non-decreasing only up to round $s=\tau_i$. We will refine this statement. The proof after it does not rely on $w_{s,i}$ increasing after round $\tau_i$.
> Cmt 13: unfinished log term appears in (48).
Re: Thanks for catching this. It was meant to be $\log(2T/\zeta)$ according to the last line in Page 22.
> Cmt 14: It would be better to specify that Lemma 8 (resp. (62)) is applied to derive (50) (resp. (64)).
Re: Thanks for the suggestion. We will expand the explanations of key steps in our proof (including these two and more) to improve the readability.
> Cmt 15: Some notations are not well-defined in Algorithm 1. $K\leq T$ is not specified.
Re: Thanks for the suggestions. Yes, initializing $\bar{w}_1$ is necessary in the current Algorithm 1, and we assume that $K\leq T$. We will explicitly specify this assumption when we introduce the problem setup.
---
Rebuttal Comment 2.1:
Comment: First, I would like to express my appreciation to the authors for their detailed explanations and modifications to the proofs.
Overall, the revised algorithms and their proofs appear correct. Although the introduction of an additional logarithmic factor may seem concerning, it is acceptable within the context of the original definition of BOBW, where such a factor is also allowed.
My only remaining concern is the computational efficiency of calculating the refined $w_{t+1}$, which now includes the new constraint $w_{t+1,i} \geq \lambda/K$ due to the truncated simplex.
---
Rebuttal 3:
Title: Response to Reviewer q85G (3/3)
Comment: > Cmt 16: Explain more on (63).
Re: According to Line 14 of Algorithm 2, the probability of pulling a deactivated arm keeps decaying since its deactivation, and all active arms equally share the remaining probability mass. For the first action to be deactivated (denoted by $1'$), $q_{1'}$ is exactly $1/K$. For the second one $(2')$, we clearly have $q_{2'}\leq \frac{1}{K-1}$. So in general, we have $q_{i'}\leq \frac{1}{K-i'+1}$, and
$$\sum_{i=1}^K q_i = \sum_{i'=1}^K q_{i'} \leq \sum_{i'=1}^K \frac{1}{K-i'+1}.$$
We will add this explanation to the paper.
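As a quick numerical sanity check (our own illustrative snippet, not part of the paper), the right-hand side of the inequality above is exactly the $K$-th harmonic number $H_K \leq 1 + \ln K$, which yields the $\sum_{i=1}^K q_i = O(\log K)$ bound used in C.4:

```python
import math

def harmonic_sum(K):
    # sum_{i'=1}^{K} 1/(K - i' + 1) reindexes to 1/1 + 1/2 + ... + 1/K = H_K.
    return sum(1.0 / (K - i + 1) for i in range(1, K + 1))

# H_K <= 1 + ln K for all K >= 1, i.e. the sum is O(log K).
checks = [harmonic_sum(K) <= 1.0 + math.log(K) for K in (1, 2, 10, 100, 1000)]
```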
References
[1] Huang, Jiatai, Yan Dai, and Longbo Huang. "Adaptive Best-of-Both-Worlds Algorithm for Heavy-Tailed Multi-Armed Bandits." International Conference on Machine Learning. PMLR, 2022. https://proceedings.mlr.press/v162/huang22c.html
[2] Genalti, Gianmarco, Lupo Marsigli, Nicola Gatti, and Alberto Maria Metelli. "$(\epsilon, u)$-Adaptive Regret Minimization in Heavy-Tailed Bandits." The Thirty Seventh Annual Conference on Learning Theory, pp. 1882-1915. PMLR, 2024. https://proceedings.mlr.press/v247/genalti24a.html
[3] Lee, Chung-Wei, Haipeng Luo, Chen-Yu Wei, and Mengxiao Zhang. "Bias No More: High-Probability Data-Dependent Regret Bounds for Adversarial Bandits and MDPs." Advances in Neural Information Processing Systems 33 (2020): 15522-15533. https://proceedings.neurips.cc/paper_files/paper/2020/hash/b2ea5e977c5fc1ccfa74171a9723dd61-Abstract.html
---
Rebuttal 4:
Title: Regarding the computational efficiency
Comment: We are glad to see that our responses successfully addressed the concerns and questions raised in the initial review, and we thank the reviewer again for carefully reading our submission (as well as the rebuttal) and providing valuable feedback. Please feel free to let us know if there are any other questions/comments.
Regarding the computational efficiency of the new update rule over the truncated simplex (i.e., the one proposed in [3]), we would like to clarify that it can still be solved efficiently (i.e., in polynomial time): since both the objective function and the domain (namely, the truncated simplex) are convex, the update is (still) a convex optimization problem, for which there are well-developed solvers.
What we meant in the initial response to Comment 3 is that, when the simplex is not truncated, there is an even more efficient way to perform the OMD update (which is to find the Lagrangian multiplier corresponding to the equality constraint via line search), which (to our understanding) is no longer applicable due to the additional constraints/multipliers introduced by the simplex truncation. We didn't mean that updating over the truncated simplex becomes "computationally inefficient" in the general sense.
We hope this addresses your concern. | Summary: This paper considers the bandit problem for heavy-tailed losses and proposes an algorithm that achieves nearly tight high-probability regret bounds.
The proposed algorithm has a best-of-both-worlds guarantee, i.e., it achieves nearly tight bounds in both adversarial and stochastic settings.
The proposed approach is also shown to be useful in terms of local differential privacy.
Strengths: - This study bypasses the assumption of truncated non-negative losses, which was required in the previous study by Huang et al. (2022).
- This paper shows high-probability regret bounds, which is rare in the contexts of heavy-tailed bandits and best-of-both-worlds algorithms.
- Obtained regret upper bounds are almost tight.
Weaknesses: - The proposed algorithm requires prior knowledge of the parameters $u$ and $v$. This is a weakness when compared to algorithms that are adaptive to these parameters, e.g., by Huang et al.(2022) and Genalti et al. (2024).
- There is a lack of mention or analysis of intermediate settings between stochastic and adversarial settings, e.g., corrupted environments.
A number of best-of-both-worlds algorithms are also effective in these settings (e.g., (Lee et al., 2020), (Zimmert and Seldin, 2021), (Dann et al., 2023)).
Technical Quality: 3
Clarity: 3
Questions for Authors: How tight is the bound of the proposed algorithm in terms of its dependence on $\log T$?
Are there any known comparable lower bounds?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations and potential negative societal impact are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Comment 1: The proposed algorithm requires prior knowledge of the parameters $u$ and $v$. This is a weakness when compared to algorithms that are adaptive to these parameters, e.g., by Huang et al.(2022) and Genalti et al. (2024).
Re: While there are $(u,v)$-adaptive algorithms proposed in [1, 2], they handle **one single regime only** (adversarial regime in [1] and stochastic regime [2]) and hence are not directly comparable to our BOBW result. Moreover, these adaptive algorithms require the truncated non-negative loss assumption.
The setup we consider in this work is the BOBW setting with knowledge of $u$ and $v$. Under this setup, the **only** existing result is from [1], which is optimal in both regimes but requires the truncated non-negative loss assumption. Compared to that, we still achieve the BOBW guarantee while removing this additional assumption.
> Comment 2: There is a lack of mention or analysis of intermediate settings between stochastic and adversarial settings, e.g., corrupted environments. A number of best-of-both-worlds algorithms are also effective in these settings (e.g., (Lee et al., 2020), (Zimmert and Seldin, 2021), (Dann et al., 2023)).
Re: That is a good point. Online Learning-based algorithms naturally ensure regret guarantee in the corrupted regime through advanced and elegant analysis, and there is no existing result on achieving regret guarantee in the corrupted regime using the detect-switch framework. It is unclear whether/how the detect-switch framework can do that (by, e.g., explicitly detecting the degree of corruption $C\in[0, \Theta(T)]$). This is indeed an informative remark we may consider adding to future versions.
> Comment 3: How tight is the bound of the proposed algorithm in terms of its dependence on $\log T$? Are there any known comparable lower bounds?
Re: There are two main sources of $\log T$ factors in our work, including 1) the log-barrier regularizer and 2) high-probability guarantees (from concentrations). For the latter one, we mean that the $\log T$ factor is inevitable for a high-probability guarantee, even considering the adversarial regime only, as shown in [3].
In terms of in-expectation regret, the worst-case lower bound in the adversarial regime is $\Omega(u K^{\frac{v}{1+v}} T^{\frac{1}{1+v}})$, and the gap-dependent lower bound in the stochastic regime is $\Omega(\sum_{i\neq i^*}\frac{\log T}{(\Delta_i)^{1/v}})$ as we mentioned in the paper.
In terms of high-probability regret, when heavy tails are involved, (to our knowledge) there is no regret lower bound showing the refined dependency on $\log T$.
References
[1] Huang, Jiatai, Yan Dai, and Longbo Huang. "Adaptive Best-of-Both-Worlds Algorithm for Heavy-Tailed Multi-Armed Bandits." International Conference on Machine Learning. PMLR, 2022. https://proceedings.mlr.press/v162/huang22c.html
[2] Genalti, Gianmarco, Lupo Marsigli, Nicola Gatti, and Alberto Maria Metelli. "$(\epsilon, u)$-Adaptive Regret Minimization in Heavy-Tailed Bandits." The Thirty Seventh Annual Conference on Learning Theory, pp. 1882-1915. PMLR, 2024. https://proceedings.mlr.press/v247/genalti24a.html
[3] Gerchinovitz, Sébastien, and Tor Lattimore. "Refined Lower Bounds for Adversarial Bandits." Advances in Neural Information Processing Systems 29 (2016). https://proceedings.neurips.cc/paper_files/paper/2016/hash/2f37d10131f2a483a8dd005b3d14b0d9-Abstract.html
===================
I would like to keep the score after reading the rebuttal.
Strengths: 1. This work discussed the earlier literature in detail.
1. It highlighted the technical challenges.
1. Overall, I think this work provided a good set of results and explained them well.
Weaknesses: 1. Is it possible to bound the expected regret of the proposed algorithm? What is the limitation?
1. As mentioned in Section 2, there are two possible definitions of regret in the adversarial setting. The author(s) should clarify which definition is used in the related works.
1. A table comparing all results is appreciated.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the 'Weaknesses' part.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Comment 1: Is it possible to bound the expected regret of the proposed algorithm? What is the limitation?
Re: We believe that the reviewer is asking whether it is possible to bound the stronger regret $\mathbb{E}[\bar{R}_T]$ defined in line 147 in the adversarial regime. We now describe the challenge of doing so in heavy-tailed bandits.
By definition, we would like to bound
$$\mathbb{E}[\bar{R}_T] = \mathbb{E}\left[ \mathbb{E}\left[\sum_{t=1}^T \langle w_t - y_{i^*}, \ell_t \rangle \,\middle|\, \ell_1,\dots,\ell_T\right]\right].$$
Note that here $i^*:=\text{argmin}_i \sum_{t=1}^T \ell_{t,i}$, which depends on the realization of the loss sequence. Looking at the inner expectation, the quantity $\sum_{t=1}^T \langle w_t - y_{i^*}, \ell_t \rangle$ depends not only on the policy ($w_t$) **but also** on the realization (the scale of the losses). In other words, as long as the losses have a large scale, the absolute regret is also large, even if the learning algorithm is good.
Therefore, one potential route is to bound $\mathbb{E}[\bar{R}_T]$ via the outer-level expectation (i.e., the realizations cannot always be very large due to the heavy-tail definition). It is unclear to us whether this can be well bounded. Notably, this was also our intuition for why it is reasonable to consider pseudo-regret in heavy-tailed bandits: the strong regret may no longer be meaningful as a performance metric.
> Comment 2: As mentioned in Section 2, there are two possible definitions of regret in the adversarial setting. The author(s) should clarify which definition is used in the related works.
Re: Thanks for the suggestion. We will clarify this in future versions. Roughly speaking, in the heavy-tailed case, both [1] and us consider pseudo-regret (as we explained in our Remark 2), and in the bounded-loss case, the stronger regret is typically considered and can be handled.
> Comment 3: A table comparing all results is appreciated.
Re: We thank the reviewer for the suggestion. We will consider adding a table to summarize the existing results for a clear comparison between our work and previous ones.
References
[1] Huang, Jiatai, Yan Dai, and Longbo Huang. "Adaptive Best-of-Both-Worlds Algorithm for Heavy-Tailed Multi-Armed Bandits." International Conference on Machine Learning. PMLR, 2022. https://proceedings.mlr.press/v162/huang22c.html
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I would like to keep the score. | Summary: In this work, the authors consider a best-of-both-worlds multi-armed bandits problem where the losses are not bounded and instead are heavy-tailed.
To be precise, in the stochastic setting, losses are generated from fixed distributions.
In the oblivious adversarial setting, the losses are drawn from distributions that are generated arbitrarily and can change from one round to the next.
In both cases, these distributions can sample heavy-tailed losses, defined such that the $(1+v)$-th moment of the losses is bounded by $u^{1+v}$ for some constants $u > 0$ and $v \in (0, 1]$.
This setting is more challenging than the standard BOBW approach because of the generation process of the losses, which can lead to negative unbounded losses but also because the performance is evaluated in terms of expected regret rather than in terms of pseudo-regret, which is a more challenging measure.
The authors propose and analyze an algorithm based on the FTRL framework. They discuss that simply using regularization with Tsallis entropy (which is the state of the art for the standard BOBW MAB problem) cannot handle large negative losses and instead rely on the log barrier regularizer, which is a standard approach to handle problems where the stability is difficult to bound.
While this approach itself is sufficient to derive bounds in the adversarial regime, achieving BOBW bounds requires supplementary tricks:
They then use a detect-switch strategy to monitor whether the environment is stochastic, in which case the arms are sampled at a rate that ensures stochastic guarantees, and otherwise make a definitive switch to the adversarial regime and use the previously discussed FTRL with log-barrier regularization.
Strengths: The authors tackle a challenging problem of BOBW bandits with heavy tail losses and propose a solution that achieves near-optimal results in both the adversarial and the stochastic regime.
The proposed methods combine well-studied methods in the BOBW literature and this paper highlights another topic of interest for the FTRL with log barrier framework.
The authors provide a detailed analysis of their methods and provide a very detailed explanation of their choices of methods.
Weaknesses: While the presented results are novel and interesting, the proposed method is suboptimal by several logarithmic factors both in the stochastic and in the adversarial framework. Both the detect-switch method and FTRL with log barrier are known to be suboptimal in the standard BOBW MAB problem with bounded losses, which means that improving upon the existing results might require a completely different approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: Do you think that the detect-switch framework is necessary to achieve these BOBW guarantees, or would some more straightforward method suffice (like FTRL with 1/2-Tsallis-INF for the standard BOBW MAB with bounded losses)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This work is purely theoretical and the limitations of the applicability of the results are properly detailed in the conditions of each theorem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Comment 1: Both the detect-switch method and FTRL with log barrier are known to be suboptimal in the standard BOBW MAB problem with bounded losses, which means that improving upon the existing results might require a completely different approach.
Re: We agree that both the log-barrier (for adversarial bandits) and the detect-switch method (for BOBW) do not achieve the optimal regret (i.e., they suffer extra $\log T$ factors) even with bounded losses. To our knowledge, however, they are currently the only known approaches that are promising for handling heavy-tailed bandits in the BOBW setting (previous approaches need additional assumptions). How to narrow the gap towards the optimal regret is still largely open, and a totally different algorithm design seems necessary for that.
> Comment 2: Do you think that the detect-switch framework is necessary to achieve these BOBW methods or whether some more straightforward method (like FTRL with 1/2 Tsallis-Inf for the standard BOBW MAB with bounded losses)?
Re: As we discussed in the last paragraph of the main body, it is largely unknown whether purely online algorithms (e.g., FTRL) alone can achieve BOBW regret in heavy-tailed bandits. Even considering the adversarial regime only, the log-barrier seems to be necessary, so one promising direction is to show that OMD/FTRL with the log-barrier alone (without the detect-switch framework) can achieve a BOBW guarantee, although an involved theoretical analysis is expected.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments, particularly the detailed discussion you had with reviewer q85G. I don't have any further questions at this point. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning | Accept (poster) | Summary: This paper shows the potential of Reinforcement Learning (RL) for designing an effective digital agent for in-the-wild control through Graphical User Interfaces (GUIs). The proposed approach relies on the advantage of the pre-trained visual language models (VLMs) while tackling real-world stochasticity by training an RL agent that interacts with an environment instead of relying on static demonstrations. Accordingly, this work proposes a novel autonomous RL approach, namely DigiRL, for training device control agents, which consists of two stages: an offline RL phase where the agent is trained on static demonstrations, followed by an in-the-wild, offline-to-online RL stage for training the agent through interacting with an environment. Consequently, this work also introduces a scalable and parallelizable Android learning environment with a reward model (evaluator) based on a robust VLM-based model. To show the effectiveness of the proposed method, an evaluation of different tasks given diverse instructions is carried out from the Android in the Wild dataset on real Android device emulators. The results show a significant improvement of DigiRL compared to the existing state-of-the-art agents, including wrapped proprietary VLMs such as GPT-4V and Gemini 1.5 pro. The paper claims to be the first to succeed in developing an autonomous offline-to-online RL approach to enable state-of-the-art performance on device control problems.
Strengths: - The usage of RL in designing a successful digital agent for a device control task is fascinating.
- I appreciate implementing such a scalable Android learning environment, and I hope the authors will open-source everything so that other researchers can reuse it.
- The clarity of the paper is worth mentioning.
- The experimental results section is rich, especially the ablation studies.
Weaknesses: - Although the POMDP definition sounds correct when defining the problem, a contextual POMDP is even more appropriate for such a problem [1].
- A pseudo-code or an illustrative diagram should be added to facilitate understanding the method.
- It is not clear how the policy and the value network are conditioned given the context $c$. (implementation-wise)
[1] Hallak, Assaf, Dotan Di Castro, and Shie Mannor. "Contextual markov decision processes." arXiv preprint arXiv:1502.02259 (2015).
Technical Quality: 3
Clarity: 3
Questions for Authors: - How are the policy and the value network conditioned given the context $c$?
- Is the RL agent trained in a multi-task fashion (which I believe is true)? I mean the agent is trained with more than one task concurrently accessing fully or partially the same models.
- If yes, do you think classical Multi-task learning or Multi-task reinforcement learning approaches would enhance the performance even more?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I believe the authors discussed the method's limitations in the final section. I agree with the authors regarding the impact of such application on the economy, society, and privacy, and that needs careful review in the future to limit any harm.
Flag For Ethics Review: ['Ethics review needed: Safety and security']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and feedback on the paper. We provide responses to the questions raised below, which we will also include in the updated version of the paper. We commit to open-sourcing the code, environment, checkpoints, and data. For this rebuttal period, we provide an anonymous link to our code: (link sent via a private message to AC per rebuttal instruction), and will make this public with the final version.
Thanks so much for your appreciation of our work that “the usage of RL in designing a successful digital agent for device control tasks is fascinating”! **Please let us know if your concerns are addressed and if so, we would appreciate it if you might be willing to upgrade your score.** We answer your questions below:
___
### “It is not clear how the policy and the value network are conditioned given the context c (implementation-wise)”
Implementation-wise, we use a 1B VLM, AutoUI, for the policy network, so the context (in our case, the task to complete) is directly included in the language input / encoder module of the VLM. For the value network, we encode the image with a CLIP encoder and the context with a 110M BERT-base encoder, for computational-efficiency reasons. After this, an additional MLP layer is added on top of the CLIP and BERT encodings concatenated with each other. We will include this clarification in the implementation-detail section of a revised version of the paper.
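For concreteness, here is a toy sketch of that wiring (our own illustration, not the authors' code: real embeddings would be, e.g., a 512-d CLIP image vector and a 768-d BERT-base [CLS] vector; tiny dimensions and random weights are used here):

```python
import random

random.seed(0)

# Toy dimensions standing in for the CLIP image embedding, the BERT-base
# context embedding, and the MLP hidden width.
D_IMG, D_CTX, D_HID = 8, 8, 4

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def value_head(img_emb, ctx_emb, W1, b1, W2, b2):
    """Concatenate the image and context encodings; a 2-layer MLP then
    produces a scalar value estimate."""
    h = img_emb + ctx_emb                                     # concatenation
    h = [max(0.0, z + b) for z, b in zip(matvec(W1, h), b1)]  # ReLU hidden layer
    return sum(w * z for w, z in zip(W2, h)) + b2             # scalar output

W1 = [[random.gauss(0, 0.1) for _ in range(D_IMG + D_CTX)] for _ in range(D_HID)]
b1 = [0.0] * D_HID
W2 = [random.gauss(0, 0.1) for _ in range(D_HID)]
b2 = 0.0
img = [random.gauss(0, 1.0) for _ in range(D_IMG)]
ctx = [random.gauss(0, 1.0) for _ in range(D_CTX)]
v = value_head(img, ctx, W1, b1, W2, b2)
```

Per the review discussion, the actual value function is trained with a cross-entropy (classification) loss rather than plain regression; that training objective is not shown in this sketch.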
___
### “A pseudo-code or an illustrative diagram should be added to facilitate understanding the method.”
Thanks for the suggestion. We have made an illustrative diagram to help readers understand the method in Figure 2 of the Rebuttal PDF. In this diagram, we have an instruction-level value function and a step-level value function, both of which serve as "filters" to identify the "advantageous" data that the agent should train on with AWR. We will include this diagram in an updated version of the paper!
___
### “Is the RL agent trained in a multi-task fashion (which I believe is true)? I mean the agent is trained with more than one task concurrently accessing fully or partially the same models.”
Yes, the RL agent is trained in a multi-task fashion where at the start of each episode a random task (e.g. “Find the nearest place to buy a beach umbrella” and “Go to ebay.com, search for logitech 950” ) is drawn out of a task pool containing 200 tasks.
___
### “If yes, do you think classical Multi-task learning or Multi-task reinforcement learning approaches would enhance the performance even more?”
This is a good idea! In fact, we believe that DigiRL is an effective starting point for building effective device-control agents and researching RL algorithms for training agents. We are now ourselves building on DigiRL to devise better RL approaches for training agents in this environment. Your suggestion is valuable for our exploration, and we will study multi-task RL approaches such as PCGrad [1], task grouping [2], etc. to further enhance the performance of DigiRL.
___
### “Although the POMDP definition sounds correct when defining the problem, a contextual POMDP is even more appropriate for such a problem”
Thanks for pointing out this definition! We will post the device control problem in Section 3 as a contextual POMDP as it is indeed more appropriate. Note that this does not change any of the training objectives as it is largely a notational change. We will include an additional distribution over contexts $c$ when we define the POMDP.
[1] Yu, Tianhe, et al. ‘Gradient Surgery for Multi-Task Learning’. arXiv [Cs.LG], 2020, http://arxiv.org/abs/2001.06782. arXiv.
[2] Fifty, Christopher, et al. ‘Efficiently Identifying Task Groupings for Multi-Task Learning’. arXiv [Cs.LG], 2021, http://arxiv.org/abs/2109.04617. arXiv.
---
Rebuttal 2:
Title: Link to anonymous code
Comment: https://anonymous.4open.science/r/digirl-anonymous-7ED0/
Here is the link to our code promised for Reviewer zDzF
---
Rebuttal 3:
Title: Rebuttal
Comment: Dear Authors,
Thanks a lot for answering my questions and addressing my concerns!
Given the authors' responses to my questions and those of other reviewers, I will increase my score from 6->7 while keeping my confidence level, as it is similar to Reviewer 251X; my background is RL.
---
Rebuttal Comment 3.1:
Comment: We thank the reviewer for the response. We appreciate your score increase! | Summary: This paper introduces a novel autonomous reinforcement learning (RL) approach, DigiRL, for training in-the-wild device control agents. DigiRL first employs offline RL to fine-tune a pre-trained vision-language model (VLM as the agent) using stale task-specific data, and then further refines the agent through online RL by continuously interacting with parallelized emulators. DigiRL achieves a 49.5% absolute improvement in task success rate over existing state-of-the-art agents, establishing a new benchmark for digital agents in device control.
Strengths: 1. The paper is well written and well motivated; many important technical/implementation details are covered.
2. The paper considers a challenging problem, autonomous device control, where existing LLM-based methods struggle to achieve acceptable success rate. The proposed method leverages VLM and RL techniques and significantly improves compared to these baselines.
3. The experiments are comprehensive and informative, covering LLM/RL agents, prompting and learning paradigms, offline and off-to-on RL, as well as failure modes analysis.
4. The authors implement a multi-machine emulator system to support parallel and real-time training of online RL.
Weaknesses: Major Points:
1. From a ML methodological point of view, the novelty/contribution of the paper is limited. To perform offline and off-to-on RL, the paper adopts a number of existing techniques such as AWR and doubly-robust estimators with little customization (e.g., hard filtering on the advantages instead of computing $\exp(A)$, which is mainly intended for easier implementation), all well-known to the community. The only thing that seems "new" is training value functions with cross-entropy losses, also directly taken from [1], and the equations in line 250-251 seem questionable (see my comments in the Questions section). Moreover, no theoretical insight is provided to elucidate why these specific designs are chosen.
[1] Stop regressing: Training value functions via classification for scalable deep rl, 2024.
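For concreteness, the cross-entropy value loss the review refers to can be sketched as follows (our illustration, not code from the paper). One relevant property is that the value prediction minimizing the expected loss under a Bernoulli reward is the reward's mean, i.e., the per-instruction success rate:

```python
import math

# Binary cross-entropy loss for a value prediction v in (0, 1) against a
# binary reward r, as described in the review: -(r*log v + (1-r)*log(1-v)).
def ce_value_loss(r, v):
    return -(r * math.log(v) + (1 - r) * math.log(1 - v))

# With r ~ Bernoulli(p), the v minimizing the expected loss is v = p,
# i.e., the value head learns the success rate (illustrative check).
p = 0.3
expected = lambda v: p * ce_value_loss(1, v) + (1 - p) * ce_value_loss(0, v)
grid = [i / 100 for i in range(1, 100)]
best_v = min(grid, key=expected)
assert abs(best_v - p) < 1e-9
```

This property is what reconciles the two claims discussed in the Questions section: the loss pulls $V$ toward the average of $r$ over rollouts, not toward any individual $r$.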
2. Limited Scope. The entire paper focuses on a very specific domain (autonomous device control). The scope of the proposed method might be too narrow to be of general interest to the ML/RL community.
Minor Points:
- In section 4.2, how to properly balance the two estimators, one with higher variance and one with higher bias to achieve the optimal result? What's the hyperparameter profile of the combined estimator? Have you tried any alternative designs and can you give theoretical insight to justify this specific design choice?
- Regarding the offline and off-to-on RL setting: to my knowledge, the main point of offline RL is to leverage a large body of stale data to safely and efficiently pretrain an RL agent. Therefore, for off-to-on RL, where online RL operates as the fine-tuning stage, one should use far less data than the offline pretraining dataset to ensure the setting is meaningful. The fact that the authors intentionally use the same amount of data for both the offline and online stages, which assumes access to a large amount of online data, might make the offline pretraining unnecessary. To see this, I recommend the authors directly perform online RL on the combined dataset; it is highly possible that such a "purely online" agent outperforms its off-to-on counterpart.
- The authors spend quite a few words discussing the challenges of stochasticity and device control as a POMDP. However, I do not see any specific design or technical contribution targeting such problems.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In line 250-251, the CE loss pairs $r$ with $\log V$ and $(1-r)$ with $\log (1-V)$. Intuitively, this means one would like to make the distribution of $r$ and $V$ as close as possible (when $r \to 1$, $V \to 1$ and vice versa). This seems to contradict the claim in line 240-241 that "Intuitively, if a rollout attains a high value of $A(s_h, a_h, c)$, it means the value function $V$ is small".
2. How do you perform the train/test task split, if not a random split? It is odd to see in Table 1 that almost all testing performance clearly surpasses the training performance (normally it should be the opposite), which suggests that the testing tasks are in general easier than the training tasks and not i.i.d.?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! At the outset, we want to clarify our scope: our goal is to show that autonomous RL can be used to build SOTA device control agents that outperform proprietary models (Gemini/GPT-4). Our methodological contribution involves identifying and designing a good RL objective to be able to do so (robust advantages + curriculum + cross-entropy loss + AWR). We believe that our scope of device control is akin to several prior ICML/ICLR/NeurIPS papers that focus on applying ML / RL techniques to one domain (e.g., RL for chip design (ICML) [6], LLMs for web navigation (ICLR) [7], web shopping (NeurIPS) [1]), and were still judged to be valuable contributions. In fact, device control is already more general than several important agent domains (e.g., shopping, web navigation, travel planning, etc.) that have been considered individually. We therefore think that an RL approach that attains SoTA **for the first time**, in a more general setting than recent papers, should not be grounds for rejection. To address the questions, we add **new results for hyperparameter profiling, advantage design, online RL from scratch, and stochasticity**. *Please let us know if the concerns are addressed and, if so, we would be grateful if you upgrade your score.*
___
### “Limited Scope; From a ML methodological point of view, the novelty/contribution of the paper is limited.”
As mentioned, our work is already in a more general setting than work in foundation agents that appears in ICML / NeurIPS / ICLR [1,2,3,4]. We use two subsets of AitW that focus on different parts of device control with around 200 tasks each (web shopping, device management; see Tables 2, 3). For the RL community, we identify challenges in a real-world problem setting of device control (e.g., see Fig 1, Table 1 in the PDF), and design an RL approach that can efficiently learn in this environment by combining robust advantages, curriculum, cross-entropy losses & AWR. While each piece individually is not as novel, combining each piece into an effective system for a real-world, user-scale problem is our contribution. As a systems paper (that has been of interest in ML/RL [5,6,7]), we think we should be evaluated on the efficacy of our system rather than a novel RL algorithm.
___
### “Step level advantage estimator”
**Alternate designs:** We tried using a Q-function $Q(s, a)$ for computing advantages, but this leads to significantly worse and less stable results (see Table 1 of the PDF). This is because the action coverage is not high enough to properly attribute reward signals to the action instead of states.
**The theoretical justification:** follows the analysis of GAE [8], where the one-step estimator $V^{step}(s_{h+1}) + r(s_h, a_h) - V^{step}(s_{h})$ corresponds to the high-bias estimator $\hat{A}^{(1)}$ in Eq. 11 of the GAE paper, and the MC reward estimator $\lambda^{H-h}r(s_H, a_H, c)$ corresponds to the high-variance estimator $\hat{A}^{(\infty)}$ in Eq. 15 of GAE. While GAE takes an average of a series of k-step estimators, we choose to omit the intermediate estimators for simplicity; nonetheless, we enjoy similar bias-variance trade-offs. Similar to GAE [8], the combined estimator is $\gamma$-just (Proposition 1 [8]) when $V^{step}$ is accurate, i.e.
$E_{s,a \sim d^\pi}A^{step}(s,a)\nabla \pi_{\theta}(a|s) = E_{s,a \sim d^\pi}A^{\pi}(s,a)\nabla \pi_{\theta}(a|s)$, where $A^{\pi}$ is the true advantage function.
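The two estimators described above can be sketched in a few lines (our illustration, not the authors' code; the values, rewards, and the averaging used to combine the estimators are assumptions for demonstration only):

```python
# High-bias one-step estimator: r(s_h, a_h) + V(s_{h+1}) - V(s_h)
def one_step_advantage(values, rewards, h):
    return rewards[h] + values[h + 1] - values[h]

# High-variance Monte Carlo estimator: lambda^(H-h) * r(s_H, a_H)
def mc_advantage(rewards, h, lam):
    H = len(rewards) - 1
    return (lam ** (H - h)) * rewards[H]

values = [0.0, 0.5, 1.0]   # hypothetical V^step at s_0, s_1, s_2
rewards = [0.0, 1.0]       # binary terminal reward at the last step

a_one = one_step_advantage(values, rewards, 0)   # 0 + 0.5 - 0 = 0.5
a_mc = mc_advantage(rewards, 0, 0.9)             # 0.9 * 1 = 0.9
combined = 0.5 * (a_one + a_mc)                  # illustrative combination
assert abs(a_one - 0.5) < 1e-9 and abs(a_mc - 0.9) < 1e-9
```

Unlike GAE, the intermediate k-step estimators between these two extremes are omitted, as stated above.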
**Hyperparam tuning:** The only hyperparameter is $\lambda$, which is tuned similarly to the GAE $\lambda$. We provide a result in **Table 2 of the 1-page PDF** ablating $\lambda$ from 0.0 to 0.9, and found DigiRL not to be very sensitive to it.
___
### Benefits of offline-to-online RL
We ran online RL from scratch in Figure 3 of the PDF. We see that our off-to-on agent smoothly transitions into the online phase without any unlearning, and it results in a lower cumulative regret than online RL from scratch. Avoiding unlearning while benefiting from a better initialization is crucial in scenarios where the desideratum is to keep adapting the agent quickly upon deployment.
___
### Handling stochasticity and POMDP via DigiRL
To show the challenge of stochasticity and dynamism, we add Figure 1 in the 1-page PDF comparing the performance of a stale offline DigiRL agent and an agent updated via online DigiRL. Despite being trained with RL, the performance of the stale agent decays as time moves on, whereas continually training with online DigiRL avoids this issue; hence, DigiRL addresses the performance drop amidst stochasticity.
We also clarify that utilizing a POMDP formulation is important because at some instants, the exact state of the device may simply be unknown (e.g., when a page is loading it is impossible to know what is loading without referring to the previous state). Our practical implementation handles this by using the history of the last two screenshots as the state of the RL agent (see Lines 199-200).
___
### Line 250-251, clarification about r, V and cross-entropy
This would not make $r$ and $V$ close, because $r$ is a function of both $s$ and $a$, while $V$ is only a function of the instruction. As long as the reward values for all $(s,a)$ for a given instruction are not all 1, $V$ will take a value smaller than 1. Concretely, $V$ is the average $r$ over all $(s, a)$ pairs for the same instruction. Hence, this **does not** contradict the claim in 240-241: say the agent has a 10% success rate for a particular instruction, then $V=0.1$. For successful rollouts, $A(s_h, a_h, c) = r(s_H, a_H, c) - V^\text{instruct}(c)=1-0.1=0.9$. But if the agent has a 30% success rate for an instruction, $A(s_h, a_h, c)=0.7$ for a successful rollout, because now $V=0.3$.
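The arithmetic above can be sketched directly (illustrative only; the function name and setup are ours, not from the paper):

```python
# Advantage of a successful rollout: binary terminal reward minus the
# per-instruction value, which tracks the instruction's success rate.
def successful_rollout_advantage(success_rate):
    terminal_reward = 1.0         # r(s_H, a_H, c) for a successful rollout
    v_instruct = success_rate     # V^instruct(c) ~ average reward under c
    return terminal_reward - v_instruct

assert abs(successful_rollout_advantage(0.1) - 0.9) < 1e-9  # 10% success
assert abs(successful_rollout_advantage(0.3) - 0.7) < 1e-9  # 30% success
```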
___
### How do you perform the train/test task split
We use the standard train/test task split from AitW. Train and test tasks are generated from the same set of templates, such as “Go to {shopping website}, and search for {item name}”, which might allow for generalization.
---
Rebuttal 2:
Title: references
Comment: [1] ‘WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents’. NeurIPS 2022.
[2] ‘Mind2Web: Towards a Generalist Agent for the Web’. NeurIPS 2023.
[3] ‘GPT-4V(ision) Is a Generalist Web Agent, If Grounded’. ICML 2024.
[4] ‘TravelPlanner: A Benchmark for Real-World Planning with Language Agents’. ICML 2024.
[5] ‘Data-Driven Offline Optimization for Architecting Hardware Accelerators’. ICLR 2022.
[6] ‘Chip Placement with Deep Reinforcement Learning’. ICML 2022.
[7] ‘A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis’. ICLR 2024 (Oral).
[8] Schulman, John, et al. ‘High-Dimensional Continuous Control Using Generalized Advantage Estimation’.
---
Rebuttal 3:
Title: Discussion period ends soon
Comment: Dear reviewer 251x,
Apologies for bothering you! Since the discussion period will end in two days, we would be grateful and would sincerely appreciate if you could respond to our rebuttal, leaving us enough time to address any remaining questions.
Thanks, Authors
---
Rebuttal 4:
Comment: Thank you for the rebuttal, and I appreciate the effort of providing many new experiments in the PDF; please do add these to the revised version. I think most of my concerns are properly addressed. Regarding the technical novelty, since my expertise mainly comes from RL algorithm research, and much less from developing AI agents/systems, I might not be in the best position to make the judgement.
Nevertheless, given all information provided, I will raise my score 4->5 but lower my confidence 4->3, and vote for acceptance.
---
Rebuttal Comment 4.1:
Comment: We thank the reviewer for reading our rebuttal. We are glad that our rebuttal has resolved most of your concerns. We appreciate your raised score and your vote for acceptance!
Strengths: - The paper is well-structured and easy to follow.
- Many design choices are well motivated.
- The experiments are nice and well support the claims.
Weaknesses: - Overall there is no major weakness. There are only several questions and potentially interesting empirical studies to look at. Check more details in the question sections.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors compare more advanced LLM reasoning or planning algorithms like Chain of Thoughts (CoT), Tree of Thoughts (ToT), Reasoning as Planning (RaP), etc.?
- Following the previous question, is it possible to compare with those planning/search-based methods with the autonomous evaluator or the trained value model of DigiRL as value functions?
- What is the training time and compute requirement for online training?
- In Figure 7, what does the AWR reweighting mean? Is it simply AWR?
- With the auto-curriculum setup, it may be interesting to look at what types of data/replay are critical throughout the online learning process; with simple categorization like failure mode in Figure 5 or whatever characterization can be interesting.
As detailed in the "Challenges of stochasticity" paragraph in section 3, could the authors provide some studies on unpredictable distractor and technical glitches?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review and feedback on the paper. To address the raised questions, we add new results to include comparisons with **CoT based planning**, and **a state-of-the-art LLM planning algorithm for device control called AppAgent [1]**. We also provide **additional results** for the auto-curriculum setup and ablation studies illustrating the challenge of stochasticity and dynamism for device control. We also provide clarifications on the rest of the questions, and will incorporate these results and clarifications in the final version of the paper.
**Please let us know if your concerns are addressed and if so, we would appreciate it if you could upgrade your score. We are happy to discuss further.** We answer your questions below:
___
### **[New result]** “Following the previous question, is it possible to compare with those planning/search-based methods with the autonomous evaluator or the trained value model of DigiRL as value functions?”
Good question! In Table 3 of the one-page PDF, we have included additional results comparing planning/search-based methods with the autonomous evaluator. In particular, we compare with Reflexion [2], which, for each task, first reflects on a trial run with the result given by the autonomous evaluator and then performs a new trial with the reflection included in the prompt. We found that the use of Reflexion + autonomous evaluator can indeed enhance performance (GPT4V Reflexion+Set-of-Marks at 14.6% vs. GPT4V Set-of-Marks at 8.3%). However, **this approach still performs worse than our method DigiRL (14.6% compared to 67.2%)**.
___
### “Could the authors compare more advanced LLM reasoning or planning algorithms like Chain of Thoughts (CoT), Tree of Thoughts (ToT), Reasoning as Planning (RaP), etc.?”
Thanks for bringing this up. We want to clarify that several baseline results in the submission were already prompted with a chain of thought. For instance, the “Set-of-Marks” approach with GPT4V and Gemini-1.5-Pro, and CogAgent are prompted with Chain of Thought (CoT) to produce an action. With regards to a SoTA planning baseline, we remark that we also compare DigiRL to AppAgent [1] in Table 1, which is a state-of-the-art VLM Retrieval Augmented Generation (RAG) approach specifically designed for device control. In both cases, we find that **DigiRL outperforms these prior methods**, indicating the superiority of training with autonomous RL for user-level device control over planning / prompting with frozen models.
The methods discussed above constitute the state-of-the-art planning based approaches for device control. We are also happy to add any other comparisons if the reviewer has particular pointers for existing approaches that use planning / reasoning in the device-control domain.
___
### **[New result]** With the auto-curriculum setup, it may be interesting to look at what types of data/replay are critical throughout the online learning process; with simple categorization like failure mode in Figure 5 or whatever characterization can be interesting.
We have run additional experiments to understand what types of data are trained upon during the course of online learning. In particular, we categorize the tasks in the Web Shopping subset into three difficulty levels according to the criterion in Table 3 in the appendix, and plot the learning curve for each difficulty level during the online learning process in Figure 5 of the Rebuttal PDF. From this plot, we can see that the performance on difficulty 1 (easy) tasks improves significantly in the first 200 trajectories, indicating that much difficulty 1 data is trained upon at the very beginning. In the later stage of training, the performance on difficulty 1 tasks stays around the same while the performance on difficulty 2 and difficulty 3 tasks increases substantially, indicating that data from difficulty 2 and 3 tasks is replayed more at this stage. This indicates that the auto-curriculum helps us upweight the right trajectories to focus on at different stages of training.
___
### **[New result]** As detailed in the "Challenges of stochasticity" paragraph in section 3, could the authors provide some studies on unpredictable distractor and technical glitches?
We have run an additional experiment where we take a trained DigiRL agent and continue to train it with more online data for four days of wall-clock time, even though the previous training run that produced that agent had already converged. As shown in Figure 1 in the 1-page PDF, as more time passes, the performance of the previously-trained DigiRL policy begins to drop. On the other hand, despite having converged in the previous training run, continuing to update the DigiRL agent with more online interactions leads to more robust and stable performance. We believe that this experiment precisely illustrates the issues with the stochastic and dynamic nature of websites: the decay in performance of the previously-trained policy underscores the challenge of dynamism and stochasticity in the device control setup, while the stable performance with continued DigiRL training demonstrates the efficacy of our approach in maintaining performance despite these challenges.
___
### In Figure 7, what does the AWR reweighting mean? Is it simply AWR?
Yes, you are correct: it is simply AWR. We call it reweighting because it reweights samples to “soft filter” advantageous actions, whereas we use “hard filtering” for implementation simplicity.
___
### What is the training time and compute requirement for online training?
For our main experiments, we are able to run 8 emulators in parallel on a 16GB T4 GPU with 32 CPUs, and finish an online training run with 1k trajectories within 3 days. Figure 16 in the appendix illustrates the relative speed-up obtained if we use more resources. For example, with 128 CPUs and 4 GPUs, we can achieve a speed-up of 3.55x when set up properly.
---
Rebuttal 2:
Title: references
Comment: [1] Zhang, Chi, et al. ‘AppAgent: Multimodal Agents as Smartphone Users’.
[2] Shinn, Noah, et al. ‘Reflexion: Language Agents with Verbal Reinforcement Learning’.
---
Rebuttal Comment 2.1:
Title: Thanks for the rebuttal
Comment: The rebuttal addressed most of my previous concern. Thanks!
---
Reply to Comment 2.1.1:
Comment: We thank the reviewer for recognizing our efforts in the rebuttal and additional new results to address your previous concerns, and are glad that the additional results address the concerns. Since there is still one more day, we are also wondering if there would be some other discussion or evidence that we can provide in this period to help improve your score of our paper further. Please let us know. We would be very grateful if you are willing to upgrade your score. Thanks a lot! | Summary: This paper tackles AI agent training for controlling digital devices (e.g., web navigation). The proposed framework, named DigiRL, is a 3-stage training process consisting of model pre-training, offline fine-tuning (offline RL), and online fine-tuning (online RL). To achieve this goal, the authors first build a parallelizable Android learning environment that enables fast online interactions for policy learning; they then adopt a VLM-based evaluator to provide reward signals for the agents; finally, they perform ablation studies to examine several key design choices in typical policy-based RL methods for the third stage. Compared to larger models trained without this stage, the proposed approach enjoys significant performance enhancement due to the online fine-tuning stage.
Strengths: + The authors did a good job introducing the background, the problem setup, the baselines, and the details of their proposed method.
+ Fine-tuning large VLMs in an online fashion can be challenging; the performance improvement obtained by the proposed method, which is relatively simple, is substantial and the overall approach looks promising.
Weaknesses: The main issue is the limited comparison with online RL methods for fine-tuning LLM-based agents. The only RL method compared in the experiments is Filtered BC (besides vanilla AWR, which the proposed method is based on). Filtered BC is, strictly speaking, not an online RL method. Admittedly, AI agent training for device control is a relatively under-explored new area, and the authors claim that theirs is the first successful offline-to-online RL approach for device control AI agents; still, I believe more experiments with other online RL baselines not originally designed for device control are required to justify DigiRL's advantages. For example, the classic on-policy methods such as REINFORCE and PPO, or the more recent ones that are more sample-efficient, such as [1, 2] (which might be considered more-or-less concurrent work, though). Further comparisons would also help provide more insight into the unique challenges of the device control problem for digital agents.
[1] Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents
[2] REBEL: Reinforcement Learning via Regressing Relative Rewards
Technical Quality: 3
Clarity: 3
Questions for Authors: - Will the proposed method further scale well with more online interactions?
- While multi-turn interactions are a challenge of the device control problem, is there any component in the proposed framework that specifically helps to tackle it?
I am willing to adjust my ratings after seeing the authors' responses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and feedback on our paper. To address the main concern regarding comparisons, we provide additional results and appeal to comparisons from prior work to demonstrate that DigiRL outperforms several online RL methods. These comparisons include REINFORCE, PPO, and Q-value based methods. We also provide **additional results** to show the favorable scaling of our method with more interactions. **Please let us know if your concerns are addressed and if so, we would appreciate it if you might be willing to upgrade your score. We are happy to discuss further.** We answer your questions below:
___
### **[New result] Comparisons to other online RL methods**
We also provide new results for a comparison against several online RL methods.
1) **REINFORCE**: We note that policy gradient via REINFORCE reduces to an online filtered BC loss when the rewards are given by +1 and 0; as a result, our filtered BC results in the paper already indicate that DigiRL outperforms REINFORCE. To see this equivalence between REINFORCE and online filtered BC, note that:
$ L_\text{REINFORCE} = E_{\tau \sim \mathcal{D}}(\sum_{h=1}^{H}r_h)(\sum_{h=1}^{H}\log \pi(a_h|s_h))$ (surrogate for REINFORCE)
and
$L_\text{Filtered-BC} =E_{\tau \sim \mathcal{D}}\mathbb{1}\{\sum_{h=1}^{H}r_h > \text{t}\}(\sum_{h=1}^{H}\log \pi(a_h|s_h))$, for trajectories $\tau$ and threshold $t$.
Since our reward takes a binary 0/1 value at the end of a rollout, both $(\sum_{h=1}^{H}r_h)$ and $\mathbb{1}\{\sum_{h=1}^{H}r_h > t\}$ will only evaluate to 1 if the trajectory is successful and 0 otherwise, resulting in an identical multiplicative factor for both REINFORCE and online filtered BC. Our results already show that DigiRL outperforms online filtered BC, which implies that DigiRL outperforms REINFORCE as well.
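The equivalence can be checked numerically; a minimal sketch (our illustration, assuming a single 0/1 terminal reward and a threshold $t \in (0, 1)$):

```python
# With a binary terminal reward, the REINFORCE trajectory weight equals the
# filtered-BC indicator, so both losses weight each trajectory identically.

def reinforce_weight(rewards):
    # REINFORCE scales the trajectory log-likelihood by the return sum(r_h).
    return sum(rewards)

def filtered_bc_weight(rewards, threshold=0.5):
    # Filtered BC keeps only trajectories whose return exceeds the threshold.
    return 1 if sum(rewards) > threshold else 0

success = [0, 0, 0, 1]   # reward only at the last step of a successful rollout
failure = [0, 0, 0, 0]

assert reinforce_weight(success) == filtered_bc_weight(success) == 1
assert reinforce_weight(failure) == filtered_bc_weight(failure) == 0
```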
2) **PPO**: We have run experiments comparing DigiRL with online PPO, and we found that DigiRL is more efficient than PPO. The efficiency of DigiRL compared to PPO stems from: 1) DigiRL starts from an offline RL checkpoint, so it maintains an initial advantage over PPO via the use of offline data, and 2) DigiRL is able to make use of off-policy data to stabilize training, while PPO always updates on a small batch of on-policy data. The inefficiency of on-policy PPO corroborates findings in the recent multi-turn RL literature [1,2]. We will run this comparison fully and add this baseline in all settings.
3) **Q-value based actor-critic methods.** We have also tried some other designs involving Q-function training [1, 2] for the step-level advantage estimator and found that training a Q-function in all of our experiments obtained inferior performance. Concretely, we attempted to use $Q(s, a) - V(s)$ to compute step-level advantages following Zhou et al. [2]. In this case, we were not able to get the Q-function to correctly pay attention to the action input, leading to Q and V collapsing to very similar values everywhere. We hypothesize this is because it is relatively easy to understand what elements are present in a screenshot, but learning how an action, which appears as a small change on the screen, affects the screenshot is more challenging, because this requires inferring relationships between the precise relative locations of each element and the clicking coordinates.
As shown in Table 1 of the 1-page PDF for the Web Shopping subset, the design choice of using $V(s') + r - V(s)$ to calculate the one-step advantage instead of $Q(s,a) - V(s)$ led to better offline RL performance by 9% and reduced variance by 5.9%. Of course, this does not mean that Q-functions cannot be trained on this task, but that within the time of the rebuttal period, we found it quite hard to get the reasonable Q-functions needed for value-based online RL methods.
Finally, thanks for bringing up the concurrent work that we will discuss. We are happy to explore such methods in an updated version of the paper.
___
### **[New result] Scaling with more online interactions?**
In Figure 8 of the submission, we provide a learning curve plotting performance as more online data is collected. We also include a new result in Figure 1 of the 1-page PDF, where we set the agent to be updated with even more online data for four days after convergence. Compared to a frozen policy, the agent trained with more online interaction data maintains stable performance despite the changing nature of the websites and the device state, while the performance of a frozen RL policy gradually decays as time goes on. **This indicates that DigiRL utilizes online interaction effectively.**
___
### “While multi-turn interactions are a challenge of the device control problem, is there any component in the proposed framework that specifically helps to tackle it?”
The design of the doubly robust estimator for estimating step-level advantage, which balances bias and variance, is specifically useful in multi-turn settings and stochastic environments. Such a design for variance reduction has been shown to be unnecessary in the single-turn setting [3], but our results above comparing the one-step advantage estimator $V(s') + r - V(s)$ and the doubly-robust advantage estimator (Eqn. 4.3 in the paper) show that it is critical in ours.
Likewise, the use of a curriculum is especially important in highly multi-task environments, where training uniformly on all initial states is likely to not provide a strong signal to update the policy [4] (as shown in the comparison between “Ours w/ step-level advantage” and “Filtered BC” in Fig 7 in paper). We will clarify this.
[1] Song, Yifan, et al. ‘Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents’. ACL 2024
[2] Zhou, Yifei, et al. ‘ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL’. ICML 2024
[3] Ahmadian, Arash, et al. ‘Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs’.
[4] Jiang, Minqi, et al. ‘Prioritized Level Replay’.
---
Rebuttal 2:
Title: Raised my score
Comment: Thanks for the rebuttal. I have carefully read all the responses and reviews. I will raise my score. With that being said, I will leave it up to the AC to decide whether the contribution of developing agents is strong enough for the paper's acceptance. I also lowered my confidence as my expertise mostly lies in RL. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their feedback. We are glad that the Reviewer Xhdc thinks that there is “no major weakness” in the paper and that Reviewer zDzF thinks that “the usage of RL for designing a successful agent for device control tasks is fascinating”.
At the outset, we would like to clarify that our goal is to show that autonomous RL can be used to build SOTA device control agents that outperform the dominant approaches in the device-control community including prompting proprietary models (Gemini/GPT-4) and fine-tuning with human demonstrations. Rather than claiming novelty of the algorithm, the main contribution of this work is the efficacy of our effective RL system / approach including the environment setup, usage of autonomous evaluator, and particular design choices specific to the real-world challenges of this environment. On the RL side, our methodological contribution involves identifying and designing the right RL objective to be able to do so (robust advantages + curriculum + cross-entropy loss + AWR). (see Fig.7 in paper and one-page PDF).
In the rebuttal period, we have run additional experiments and plot the results in the one-page pdf, including:
- Table 1: Ablation on Q-function based advantage estimations (Reviewer JuUv, Xhdc, and 251x)
- Table 2: Hyperparameter profiling for step-level advantage estimation (Reviewer 251x)
- Table 3: Comparison with search-based method with autonomous evaluator (Reviewer Xhdc)
- Figure 1: Study on the effect of stochasticity (Reviewer JuUv and 251x)
- Figure 2: Algorithm diagram (Reviewer zDzF)
- Figure 3: Comparison with pure-online setting (Reviewer 251x)
- Figure 4: Comparison with pure-online PPO (Reviewer JuUv)
- Figure 5: Studies on auto-curriculum (Reviewer Xhdc)
We thank the reviewers in advance for paying attention to our new results and clarifications. We look forward to discussions and hope that our responses and the discussion will convince the reviewers that our work is valuable.
Pdf: /pdf/3036837e98f8baed5f0ad2a059f8ba7284d0e482.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Iterative Reasoning Preference Optimization | Accept (poster) | Summary: This work proposes an iterative training algorithm that enhances a model's Chain-of-Thought (COT) capabilities in reasoning tasks by combining self-improvement with preference optimization. The algorithm employs ground truth labels as supervision signals to evaluate model-generated responses, which are then incorporated into subsequent training iterations. Experiments on the GSM8K, MATH, and ARC benchmarks show significant improvements compared to the baseline, with continuous enhancement observed over multiple iterations.
Strengths: This paper introduces preference optimization into the self-improvement of reasoning tasks, yielding excellent results.
Weaknesses: 1. In the experiments, this work conducts experiments on the training sets of the GSM8K, MATH, and ARC benchmarks, and evaluates models on the corresponding test sets. All tests were completed on held-in data, with no performance results provided for the model on held-out tasks.
2. This paper emphasizes the importance of preference optimization, distinguishing it from other iterative training methods like Rest-EM [1], which should be considered as a baseline.
[1] Singh, Avi, et al. "Beyond human data: Scaling self-training for problem-solving with language models." arXiv preprint arXiv:2312.06585 (2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: --
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments.
Weakness 1: Correct; we trained using IRPO by leveraging GSM8K/MATH/ARC training set, and we tested on the respective test sets, but not other datasets. This paper doesn’t focus on generalization to other datasets, but the reviewer raises a valid point. We believe that generalizing to other datasets might need a larger number of prompts (e.g., by leveraging a dozen different datasets for IRPO training) and it would be an effort to find good-quality test data as well.
Weakness 2: We were not aware of this paper at the time of writing, and we will cite it. But as explained below, this algorithm is in fact already covered by our baselines.
This paper is similar to STaR (Zelikman et al.) from 2022. Our submission includes a variant of the STaR baseline that uses temperature sampling without rationalization (i.e., without providing the correct answer as a hint before generating the rationale). In Rest-EM, in the binary reward case, when the solution has the wrong final answer, r=0; in this case no gradient is taken. Therefore, Rest-EM's approach is essentially our variant of STaR. We'll clarify this piece of related literature and our baselines in our revision, and thank you for the suggestion!
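For concreteness, the filtering step shared by STaR and Rest-EM with binary rewards (keep generations whose final answer is correct; take no gradient when r = 0) can be sketched as below. The `sampler` callable and the data format are hypothetical placeholders for illustration, not the actual implementation.

```python
def star_filter(problems, sampler, n_samples=4):
    """STaR-style rejection sampling: keep only generations whose final
    answer matches the ground truth (binary reward r = 1); generations
    with wrong answers (r = 0) contribute no gradient, i.e., they are
    simply discarded from the SFT set."""
    sft_data = []
    for prob in problems:
        for _ in range(n_samples):
            cot, answer = sampler(prob["question"])  # model returns (CoT, final answer)
            if answer == prob["gold_answer"]:
                sft_data.append({"question": prob["question"], "target": cot})
    return sft_data
```

The resulting `sft_data` would then be used for a standard SFT step, and the loop repeated across iterations.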
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. | Summary: The authors propose iterative RPO method for reasoning tasks. In particular, iteratively, the model at hand will be prompted to generate many CoT reasoning, and the ones that align with the true answers will be used as chosen, the other generated samples as rejected, for a DPO+NLL loss. The authors conduct experiments on GSM8K, ARC, and MATH to demonstrate the superiority of the proposed algorithm.
Strengths: The reasoning task at hand is important to the community, and the proposed algorithm shows promising improvements.
Weaknesses: The novelty between the proposed algorithm and self-rewarding language models seems to be marginal. It also occurs to me that more ablation studies should be conducted, see **questions** for detail.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. My main concern is where the improvement comes from. In iterative RPO, there are (at least) three sources of improvements (a) more data, and (b) the model is updated iteratively, and (c) preference optimization is used in addition to SFT.
- More data. While in table 1, iterative RPO with twice as much data is compared to STaR. It seems that the data is simply duplicated, but what if we use extra data generated by the model? Currently we are simply training more epochs.
- What if we take the positive and negative samples generated by iterative RPO, and used it to train DPO or SFT? This tells us how much (b) or (c) helps, respectively.
- Currently, is the DPO baseline trained on just (x, y) or (c, x, y)?
2. A less important question is the importance of NLL. My intuition on using NLL is that it regularizes the model from hacking the reward too much, but conceptually, choosing a larger $\beta$ does the same thing -- it enforces a larger KL regularization between the SFT model and the trained model. If you vary $\beta$, will figure 2 still hold?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and insights.
Re: novelty.
- Our method is not self-rewarding because the reward signals come from the ground-truth labels instead of the model itself. More specifically, self-rewarding LM requires generating prompts, and using LMs to evaluate the sampled generations. But LMs are horrible at evaluating reasoning-related generations (e.g., according to RewardBench – although there has been some progress in the past few weeks). So we believe that self-rewarding LMs cannot be successfully directly applied using llama-2-70b-chat, but using IRPO in an unsupervised fashion is a really promising research avenue!
- We are proposing a loss that integrates NLL while the self-rewarding LM uses standard DPO training. Through our experiments (in this IRPO submission), we demonstrate the importance of the NLL term on positive examples.
- The focus of self-rewarding was general instruction tuning, while our paper focuses on reasoning: math and science reasoning in particular. In fact, self-rewarding did not bring improvement in the math category.
Q1:
- IMPORTANT CLARIFICATION on “more data”: The “twice as much data” baseline means that we actually *generated* twice as much data – there is no duplication. We apologize for the confusion and will emphasize this point in the paper.
- On the second bullet point: “Using the positive & negative samples for DPO” is actually our DPO baselines in the tables. “Using positive samples for SFT” is actually the STaR baseline in the tables.
- On the third bullet point: (c, x, y). The DPO baseline is essentially RPO but without the NLL term.
Q2: Yes. We experimented with different values of $\beta$ (0.05, 0.1, 0.5, 1.0) and the probability trends are similar within each dataset. We are investigating why naive DPO sometimes leads to decreasing chosen AND rejected probabilities as in Figure 2. Our current hypothesis is that it's related to the fact that both chosen and rejected generations are sampled from the most recent model, so both have high probabilities under the most recent model's distribution.
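As a rough illustration of the loss under discussion, here is a minimal sketch of a DPO term plus a length-normalized NLL term on the chosen sequence. The exact weighting and normalization in the paper may differ; sequence-level log-probabilities under the policy and the frozen reference model are assumed to be precomputed.

```python
import math

def rpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             chosen_len, beta=0.1, alpha=1.0):
    """DPO preference loss plus an NLL regularizer on the chosen (correct)
    sequence. All log-probabilities are sequence-level sums; alpha weights
    the NLL term (alpha = 0 recovers plain DPO)."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    dpo_term = math.log(1.0 + math.exp(-beta * margin))  # = -log sigmoid(beta * margin)
    nll_term = -logp_chosen / chosen_len  # length-normalized NLL on the chosen sequence
    return dpo_term + alpha * nll_term
```

Note how the NLL term directly pushes the chosen (correct) sequence's probability up, whereas the DPO term only constrains the margin between chosen and rejected, which is consistent with the Figure 2/3 discussion above.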
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. With regard to Q2 second point, I meant train DPO and STaR on the data generated by 4 iterations of Iterative RPO and see how more generated data can help. Meanwhile, since Iterative RPO also incorporates NLL Loss, for fair comparison, the DPO baseline should also incorporate NLL so that we can understand where the improvement comes from.
---
Reply to Comment 1.1.1:
Comment: To reviewer E9gS:
Thank you for your reply!
Do you mean the second point of Q1 (instead of Q2)?
- When training for M2, we’ve tried using iteration 1 data and iteration 2 data **together** (rather than our proposed method of iteration 2 data only, but initialized from M1). The result for gsm8k is actually worse than **only** using iteration 2 data initialized from the iteration 1 model M1 (72.1 instead of 73.1). This indicates that **the data quality is more important than the amount**.
- Using data from multiple iterations necessarily requires running the iterations in the first place: collecting 4 iterations of data requires IRPO training, so even if training on that data works, we cannot conclude that iterations are unnecessary.
If the reviewer is wondering where the improvement comes from, it comes from 2 sources:
- **It comes from the added NLL term**, because our approach outperforms the model trained without the NLL term (regular DPO) in the first iteration.
- **It comes from iterations**, because more generated data (from the current model) don’t help as much. 2x data (no duplicates) doesn’t work as well – see last two rows of Table 1. In contrast, training on higher quality data generated from the improved model from the last iteration brings better performance.
These two are the sources of improvement. | Summary: This paper proposes a novel method for preference optimization for reasoning tasks. Their method involves prompting models to generate the CoT reasoning steps and answers for a set of reasoning task inputs, then labeling samples as correct or incorrect using the ground truth outputs, and training the preference model. The authors use standard DPO loss with the addition of a negative sampling loss using samples for which the model generates incorrect outputs. They train models iteratively, using the previous generation as the base model to generate new outputs at each step. They observe that using this method they are able to improve significantly over DPO for three reasoning tasks.
Strengths: RPO demonstrates performance boosts on a variety of tasks, outperforming DPO and SFT. Improving model performance for reasoning tasks has shown to be a challenging task, and this method shows promise for improving model performance in this area.
Weaknesses: 1. Although the authors present ablations on the number of data points for RPO, they do not present experiments with simply training the model for longer. This is true for the comparisons to DPO as well. Though iterative training likely is playing a role in the performance of the model, for the sake of careful analysis, it would be good to compare the performance of a model trained for multiple generations vs a model trained for only one generation but an equivalent number of steps.
2. I was not able to find details on the hyperparameters used for DPO or SFT or how optimized they were. In contrast, the hyperparameters for RPO are chosen carefully. While it's unlikely all performance boosts are due to this, it should be clear how hyperparameters were chosen for the sake of comparison and reproducibility.
3. Though the comparison of DPO with and without NLL shows that the loss for rejected samples decreases when NLL is applied, it does not demonstrate why this is important for model improvement. A more effective analysis would compare the mistakes made by the model with and without this loss (e.g. FNR/TNR).
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What hyperparameters are used for DPO and SFT? Are they tuned with the same care as those for RPO?
2. Do you explore the reasoning steps generated by models? How often are they actually correct?
3. What is the cost of training models using this method?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors address limitations in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and suggestions!
**Weakness 1 (longer training)**: Thank you for pointing out this issue. In each iteration, we do train longer (5000 steps) but end up selecting an earlier checkpoint – the selected checkpoints (by validation accuracy) are usually trained for 2000 or 3000 steps (e.g., for GSM8K, M1 is trained for 2000 steps and M2 for 2000 steps). This is because training longer makes the model overfit to the training and hurts validation performance. We use the same checkpoint selection process for both RPO and DPO. We’ll include more detailed discussion in the revision.
Importantly, in Table 1 of the submission, we also included results where the model is trained on twice as much data instead of two iterations (using STaR and using iterative RPO, also for a max of 5000 steps). As shown in Table 1, this strategy (the last row) doesn’t match the performance of doing two separate iterations (2nd row). This highlights the advantage of iterations over longer training.
Another piece of evidence that iterative training is helpful is that recent work (Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data; https://arxiv.org/abs/2404.14367) has demonstrated the importance of on-policy data (i.e., iterations). We'll make sure to discuss this issue more thoroughly.
**Weakness 2 (hyperparameters for baselines)**: We conducted a grid search for SFT, iteration 1 of DPO, and iteration 1 of RPO. For iteration>1, we use the same hyperparameter as iteration=1 (except for num_training_step which is selected individually for each iteration). All experiments share a similar config file; therefore, for all baselines, learning rates are all tuned from the range of 5e-7 to 5e-6. DPO and RPO are tuned in the same sets of hyperparameters (if the hyperparameter exists in both methods). We will clarify this issue, and thanks for pointing this out.
**Weakness 3**: With DPO training, the probability of rejected samples decreases regardless of the NLL term (see Fig. 3a). Adding the NLL term affects the chosen samples and makes their probability go up; in contrast, their probability doesn't go up much in DPO training without NLL. The chosen samples are correct solutions, so a higher probability means the model is more likely to generate that correct solution. The direct way to measure whether the model generates correct solutions is test set accuracy, which we report in the paper.
In other words, our hypothesis is that without NLL, both chosen and rejected probabilities decrease, and given that probabilities sum to one over sequences, the probabilities might be going to unforeseen low-quality sequences.
Q1: Yes. Please see weakness 2 above. Thanks again for raising this point.
Q2: We have eyeballed ~30 generations for each dataset; to the best of our ability (given that MATH is difficult), if the answer matches, then our generated CoT is almost always correct for these particular datasets, especially GSM8K and MATH.
Q3: Generation is relatively cheap these days given efficient inference toolkits (e.g., vLLM; https://github.com/vllm-project/vllm) – using 8 V100 (32GB) GPUs, we can sample a few thousand responses (for our tasks) from 70B models in just a few minutes. All training in this paper is done using eight nodes, each containing eight A100 GPUs (briefly mentioned in Section 3); every 10 steps take around 2 minutes, and training 5000 steps (plus validation and checkpoint saving) takes less than a day.
---
Rebuttal Comment 1.1:
Comment: My apologies for the late reply and thank you for clarifying these points. The clarifications in this rebuttal and others have cleared up the main concerns I had, and I'm comfortable to increase my score slightly. | Summary: The paper introduces a novel approach to improve the performance of language models on reasoning tasks through iterative preference optimization. It proposes an iterative method that generates multiple reasoning steps and final answers, constructs preference pairs based on the correctness of the answers, and then optimizes these pairs using a modified Direct Preference Optimization (DPO) loss combined with a negative log-likelihood (NLL) term. This iterative process results in progressively improved model performance. The approach demonstrates significant accuracy improvements on the GSM8K, MATH, and ARC-Challenge datasets using the Llama-2-70B-Chat model, outperforming other models that do not rely on additional datasets. Key contributions include the iterative application of DPO with an NLL term for reasoning tasks, comprehensive experimental validation, and performance gains without the need for additional human-annotated data.
Strengths: 1. The paper is well-written, and easy to follow.
2. The idea is clean and the contribution is clear.
3. The evaluation and improvements are convincing and significant.
Weaknesses: I do not see significant weakness.
Technical Quality: 4
Clarity: 4
Questions for Authors: Although the authors have discussed other related iterative preference optimization methods (e.x. Iterative DPO, Self-rewarding, SPIN), why not include their results in the evaluation tables?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: See Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and the support!
We presented DPO results in the paper. We showed that DPO in iteration 1 does significantly worse than RPO's iteration 1; hence we did not try further iterations of standard DPO (a.k.a. Iterative DPO). The reviewer raises a good point that we can check what further iteration results look like. SPIN doesn't include the NLL term used in IRPO, so it's closer to iterative DPO than to IRPO; moreover, SPIN assumes the reference CoT is always available, whereas we do not; it also reports only modest gains on reasoning tasks.
The reviewer raises a good point about self-rewarding LM. The original algorithm requires generating prompts and evaluating the sampled responses using the LM itself. IRPO assumes knowing what the correct answer is (e.g., the answer may be 7.5 for a math question), but we wouldn’t know the correct answers for augmented prompts. LMs are quite bad at generating evaluations of a response (e.g., according to RewardBench – although there has been some progress in the past few weeks). So we believe that self-rewarding LM cannot be successfully directly applied using llama-2-70b-chat, but using IRPO in an unsupervised fashion is a really promising research avenue to keep exploring!
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dynamic Subgroup Identification in Covariate-adjusted Response-adaptive Randomization Experiments | Accept (poster) | Summary: The paper proposes a dynamic subgroup identification strategy within covariate-adjusted response-adaptive randomization (CARA) for clinical trials. This adaptive method dynamically identifies and adjusts treatment allocation to the best-performing subgroups based on ongoing trial data, and thus extend the traditional fixed-design trials to an online setup. Moreover, the paper also makes theoretical contributions by showing validity, asymptotic normality and efficiency of the estimator for the best subgroup treatment effect.
Strengths: Originality
The paper makes a significant original contribution by introducing a novel dynamic subgroup identification strategy within covariate-adjusted response-adaptive randomization (CARA) for clinical trials. This approach is innovative in its dynamic adjustment of treatment allocation based on real-time data, addressing inefficiencies associated with traditional fixed-design trials. This work also derives new theoretical results justifying the use of their proposed design.
Quality
The research is of high quality, demonstrated through both theoretical and empirical validations. The authors provide rigorous theoretical results, including validity, asymptotic normality and semiparametric efficiency. Additionally, the empirical validation using a synthetic clinical trial on cirrhosis data is well-executed and convincingly demonstrates the effectiveness of the proposed design.
Clarity
The paper is clearly written and well-structured. Complex definitions and algorithms are clearly demonstrated.
Significance
I consider this a significant work in the field of sequential experimental design and subgroup identification. It explores how to best identify and estimate subgroup effects in a dynamic regime, which is underexplored in the literature. What's more, it has broader impact in precision medicine and clinical trials.
Weaknesses: Overall the paper is technically solid, though I notice some issues that can be addressed in the revision. First, the clarity of this paper can be improved as I identified some sentences/equations that are confusing. Please check the question section for details. Second and more importantly, the way that the paper formulates the problem needs some further justification, which is also related to Q2 in the question section. The paper proposes a design to maximize the probability of correctly identifying the best subgroup, and estimate the best treatment effect; the theory and experiments all focus on this best subgroup identification. However, in personalized medicine which is the motivation for this work, I suppose practitioners care more about whether the treatment is beneficial to a certain group of patients or not, rather than the best treatment effect. In other words, even one can efficiently estimate the best treatment effect using CARA, it does not guarantee the best causal decision rule in personalized medicine. See https://pubsonline.informs.org/doi/10.1287/ijds.2021.0006 for a relevant discussion. Therefore, I suggest the authors discuss this gap between your work and practical considerations; this will help further justify the necessity of best subgroup identification.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Page 3, bottom equation: In the equation at the bottom of this page, should $=$ be $\coloneqq$, i.e., a definition?
2. Page 3, Line 127-128: The goal of the design is to maximize the correct identification probability, based on which the paper proposes CARA and develops the corresponding theory. However, why maximizing the correct identification probability is the ultimate goal? In precision medicine, a more plausible goal is to maximize the overall welfare for the patients, i.e., finding the design that best improves the medical outcome for all patients. How do you compare your design objective to this welfare maximization objective? Can you modify your design for the second objective?
3. Page 4, Eq (1): This equation is confusing. What is the objective function for $\mathbf{e}$? I only saw the constraint set inside the parenthesis.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: As discussed in the paper, one main limitation is that the method cannot handle delayed outcomes, which can be restrictive in the real-world clinical trials. In my opinion, the authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Thank you for your insightful questions regarding the design objective and for kindly pointing us to the reference.
- We completely understand your concern about the practicality of identifying the best set of subgroups rather than all the benefitted ones. In many clinical settings, treating all the benefitted patients is indeed a natural and ethical goal from the practitioner's standpoint when the number of treatments is unlimited. Different from this setting, our design is tailored to situations where resources are limited, making it preferable to identify only the best set of subgroups. For example, during the onset of COVID-19, when the total number of vaccines was limited, only a subset of the population could be treated. The design objective of identifying the best set of subgroups thus becomes more relevant.
- Furthermore, we would like to point out a subtle difference between our design objective and the design objective of maximizing overall welfare. Our work considers situations where assigning a treatment is costly, and there is an overall constraint on how many treatments can be deployed. In Fernandez-Loria and Provost (2022), for example, the goal is to ensure all the benefitted subjects are precisely assigned to the treatment arm while the cost of treatments is negligible. They aim to learn the optimal policy that maps a given covariate to the best arm for that patient profile. In our work, we consider scenarios where implementing the treatment can be costly per sample of a patient (experimentation unit). For example, in clinical settings, randomized experiments are often expensive due to the substantial costs of treatment medications. Consequently, the resource constraints in our problem are equivalent to the number of treatments that can be administered.
- We appreciate the reference you provided, which we found very helpful. We agree that efficiently estimating the best causal effect can sometimes misalign with the best causal decision rule, especially when the best decision rule is to always treat a patient when there is a positive effect. In such cases, accurately estimating the magnitude of the treatment effect becomes less important. Since our primary objective is not to assign all subjects with positive treatment effects to the treatment arm, there could be a misalignment with the best causal decision rule discussed in Fernandez-Loria and Provost (2022). Nevertheless, if we are in a scenario similar to the previously mentioned COVID-19 vaccine example, where the best decision rule is to identify the most benefitted subgroups to prioritize treatment assignments, the accurate causal estimation aligns well with the best causal decision-making. We agree that it is important to be mindful of whether the design objective leads to the best causal decision rule, and the potential misalignment between causal effect estimation and causal decision-making should be carefully discussed. We hope to add this discussion to our revised manuscript.
- While our current design is not tailored to welfare maximization, we shall discuss a potential approach to refining our design toward maximizing participant welfare. Consider a two-stage experiment with four candidate subgroups whose population treatment effects follow the order $\tau\_1 = \tau\_2 > \tau\_3 > 0 \geq \tau\_4$. At the end of Stage 1, our current procedure will correctly identify subgroups 1 and 2 as the best set of subgroups with high probability. If we were to maximize patient welfare, we would add an additional "early stopping" step: besides identifying the best set, we would also identify subgroups that exhibit significantly adverse treatment effects. To protect patient welfare, we would stop Subgroup 4 from enrolling in the next stage. In Stage 2, we not only avoid impairing the welfare of Subgroup 4 but also maximize the resources (treatments) available to the remaining benefitted subgroups.
- Thank you for your careful reading of our manuscript. We shall clarify our optimization problem formulation as follows. Our original optimization problem is formulated as $\max_{ \mathbf{e}} \min\_{2\leq j\leq m^\ast\_{t}} \frac{( \hat{\tau}\_{t-1,(j)} - \hat{\tau}\_{t-1,(1)})^2}{2\big(\hat{\mathbb{V}}\_{t-1,(1)}(e\_{1}) + \hat{\mathbb{V}}\_{t-1,(j)}(e\_{j})\big)}, \text{s.t.}\ \sum\_{l=1}^{m^\ast\_{t}} \hat{p}\_{tl} e\_l \leq c\_1, \ c\_2\leq e\_l \leq 1-c\_2,\ l=1, \ldots,m^\ast\_{t}$. The set of constraints includes the resource constraint: $\sum\_{l=1}^{m^\ast\_{t}} \hat{p}\_{tl} e\_l \leq c\_1$, and the feasibility constraint: $c\_2\leq e\_l \leq 1-c\_2,\ l=1, \ldots,m^\ast\_{t}$. Because the original objective function takes the minimum of $m^\ast\_{t}-1$ rate functions, the original optimization problem is nonlinear. We instead work with its equivalent epigraph representation: $\max\_{ \mathbf{e}} z$, s.t. $ \min\_{2\leq j\leq m^\ast\_{t}}\frac{( \hat{\tau}\_{t-1,(j)} - \hat{\tau}\_{t-1,(1)})^2}{2\big(\hat{\mathbb{V}}\_{t-1,(1)}(e\_{1}) + \hat{\mathbb{V}}\_{t-1,(j)}(e\_{j})\big)} -z \geq 0, \ \sum\_{l=1}^{m^\ast\_{t}} \hat{p}\_{tl} e\_l \leq c\_1, \ c\_2\leq e\_l \leq 1-c\_2,\ l=1, \ldots,m^\ast\_{t}$, which is Eq (1) in our current submission. The original formulation of the objective function was omitted due to space limits. We will clarify the objective function in our revision.
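To illustrate the max-min structure of Eq (1), below is a brute-force sketch that grid-searches the allocation vector directly on the minimum of the rate functions (equivalent to solving the epigraph form). The simple variance model V(e) = s1^2/e + s0^2/(1-e) and all function names here are our own simplifying assumptions, not the paper's semiparametric estimators, and a real implementation would use a nonlinear solver rather than a grid.

```python
import itertools

def toy_variance(e, s1_sq, s0_sq):
    # assumed stand-in for the estimated variance of a subgroup treatment
    # effect when a fraction e of its patients receives the treatment
    return s1_sq / e + s0_sq / (1.0 - e)

def best_allocation(tau, s1_sq, s0_sq, p, c1, c2=0.1, grid=21):
    """Grid-search sketch of the max-min rate problem: maximize the smallest
    rate between the leading subgroup and each competitor, subject to the
    resource constraint sum_l p_l * e_l <= c1 and the feasibility
    constraint c2 <= e_l <= 1 - c2."""
    order = sorted(range(len(tau)), key=lambda j: -tau[j])
    lead = order[0]  # subgroup with the largest estimated effect
    levels = [c2 + k * (1.0 - 2.0 * c2) / (grid - 1) for k in range(grid)]
    best_e, best_z = None, -1.0
    for e in itertools.product(levels, repeat=len(tau)):
        if sum(pl * el for pl, el in zip(p, e)) > c1:
            continue  # violates the resource (budget) constraint
        z = min(
            (tau[j] - tau[lead]) ** 2
            / (2.0 * (toy_variance(e[lead], s1_sq[lead], s0_sq[lead])
                      + toy_variance(e[j], s1_sq[j], s0_sq[j])))
            for j in order[1:]
        )
        if z > best_z:
            best_z, best_e = z, e
    return best_e, best_z
```

In a symmetric two-subgroup toy example with a loose budget, the search recovers the balanced allocation (0.5, 0.5), matching the intuition that the rate is maximized by minimizing the summed variances.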
Per your suggestions, we plan to make the following updates to our manuscript:
- We will add a discussion on the potential gap between our design and the best causal decision rule in precision medicine scenarios in our revised manuscript.
- We will revise sentences and notations for clarity.
- We will provide more illustrations of Eq (1).
Reference:
- Fernandez-Loria, C. and Provost, F. (2022). Causal decision making and causal effect estimation are not the same... and why it matters. INFORMS Journal on Data Science, 1(1):4–16.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification, especially on the distinctions between your work and welfare maximization objective. I have no further concerns. | Summary: This paper introduces a new dynamic treatment assignment for clinical trial to target treatment to the group most likely to benefit from it.
Strengths: The paper studies a critical problem of clinical trial design. It is clear and provides both theoretical justification and synthetic validation, demonstrating the utility of the proposed method.
Weaknesses: While the paper tackles a critical problem, this problem has been studied in depth in biostatistics. Although I am not deeply familiar with the literature on this topic, I am surprised that no existing method could be considered for comparison. A more in-depth analysis of the literature on this topic should be presented in the Appendix to justify the choice of compared methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: Following the previous point, I would recommend a more in-depth review of the literature to convince the reader that this problem does not have alternative solution in the literature.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses the critical limitation of the proposed approach's assumption of instantaneous access to the outcome following treatment. It would be interesting to discuss this assumption in the context of the existing literature. Is it a common assumption? If not, it would be important to justify further.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable questions and suggestions!
A main limitation of our proposed approach is the assumption that outcomes are observed instantaneously following treatment. This assumption, prevalent in adaptive experiments such as Hu et al. (2015) and Zhu and Zhu (2023), simplifies the modeling process and allows for quick adjustments based on the latest data. While this assumption is common, it may not always reflect real-world scenarios where responses are delayed. To provide a more comprehensive understanding, we review the literature addressing delayed responses.
- The importance of incorporating delayed responses in adaptive experiments is well-recognized in the literature. Rosenberger et al. (2012) discuss the effects of delayed responses on response-adaptive randomization. Early work by Wei and Durham (1978) introduces the randomized play-the-winner rule, which updates the contents of an urn only upon receiving patient responses, thus naturally accommodating delayed responses. This approach offers inherent flexibility in managing delays and represents an improvement over more rigid methods. Bai et al. (2002) and Hu and Zhang (2004) establish asymptotic normality results under urn models with delayed responses, explicitly concerning the asymptotic normality of the fraction of patients assigned to each treatment arm. Their findings show that limiting distributions remain unaffected by delayed responses under reasonable conditions, providing a solid theoretical basis for handling such delays. Zhang et al. (2007) extend this work by introducing a generalized drop-the-loser urn model, demonstrating that asymptotic properties are preserved despite response delays. This generalized model offers broader applicability and enhanced flexibility for managing various delay mechanisms.
- Regarding the doubly-adaptive biased coin design (DBCD), Zhang and Rosenberger (2006) have shown through simulations that moderate delays in responses have minimal impact on the power and skewness of the DBCD. Their results suggest that the design remains robust even in the presence of delays, although the effects of more severe delays are less clear. Hu et al. (2008) study the asymptotic properties of the DBCD with delayed responses, showing that these properties remain unaffected by such delays. They also provide strong consistency results for the constructed variance estimator that incorporates delayed responses, enhancing the design's reliability in practical scenarios.
- In addition to response-adaptive randomization designs, some group-sequential designs address the challenges brought by delayed responses. Hampson and Jennison (2013) propose incorporating short-term endpoints to enhance the efficiency of group-sequential tests when long-term responses are delayed. This method effectively balances immediate data needs with constraints imposed by delayed outcomes. Schuurhuis et al. (2024) expand this framework by suggesting the integration of pipeline data into group-sequential designs, enabling the trial to restart after a temporary halt. This integration provides added flexibility and robustness in managing trials with delayed responses.
- Overall, handling delayed responses requires challenging adjustments to our design. First, our objective function, based on the semiparametric efficiency bound of the subgroup treatment effect, must be revised. The difficulty stems from the function's current assumption of immediate responses, and modifying it to account for delays significantly complicates the task of ensuring accuracy.
Second, our estimators need updating. Delayed responses impact both the treatment effect and variance estimators, necessitating an additional step to address these delays. While this step is crucial for maintaining precision and reliability, it also introduces complexities that must be managed to ensure a robust estimation process.
Per your suggestions, we plan to make the following updates to our manuscript:
- Present a more in-depth analysis of the literature in our revised manuscript.
- Discuss the critical limitation in the context of the existing literature in our revised manuscript.
Reference:
- Hu, J., Zhu, H., and Hu, F. (2015). A unified family of covariate-adjusted response-adaptive designs based on efficiency and ethics. Journal of the American Statistical Association, 110(509):357–367.
- Zhu, H. and Zhu, H. (2023). Covariate-adjusted response-adaptive designs based on semiparametric approaches. Biometrics.
- Rosenberger, W. F., Sverdlov, O., and Hu, F. (2012). Adaptive randomization for clinical trials. Journal of Biopharmaceutical Statistics, 22(4):719–736.
- Wei, L. and Durham, S. (1978). The randomized play-the-winner rule in medical trials. Journal of the American Statistical Association, 73(364):840–843.
- Bai, Z., Hu, F., and Rosenberger, W. F. (2002). Asymptotic properties of adaptive designs for clinical trials with delayed response. The Annals of Statistics, 30(1):122–139.
- Hu, F. and Zhang, L.-X. (2004). Asymptotic normality of urn models for clinical trials with delayed response. Bernoulli, 10(3):447–463.
- Zhang, L.-X., Chan, W. S., Cheung, S. H., and Hu, F. (2007). A generalized drop-the-loser urn for clinical trials with delayed responses. Statistica Sinica, 17(1):387–409.
- Zhang, L. and Rosenberger, W. F. (2006). Response-adaptive randomization for clinical trials with continuous outcomes. Biometrics, 62(2):562–569.
- Hu, F., Zhang, L.-X., Cheung, S. H., and Chan, W. S. (2008). Doubly adaptive biased coin designs with delayed responses. Canadian Journal of Statistics, 36(4):541–559.
- Hampson, L. V. and Jennison, C. (2013). Group sequential tests for delayed responses (with discussion). Journal of the Royal Statistical Society Series B: Statistical Methodology, 75(1):3–54.
- Schuurhuis, S., Konietschke, F., and Kunz, C. U. (2024). A two-stage group-sequential design for delayed treatment responses with the possibility of trial restart. Statistics in Medicine.
---
Rebuttal Comment 1.1:
Title: Maintain score
Comment: Thank you for adding the literature review, I am maintaining my score | Summary: This paper studies an interesting problem in clinical trials to identify patient subgroups with the most beneficial responses to the treatment, which is essential for clinicians to create personalized treatment plans for their patients. However, most existing strategies for the design of clinical trials rely on domain knowledge or past experience of expert clinicians and stick to several pre-defined patient subgroups throughout the trial, which discards the valuable information collected from different trial stages. Some adaptive experimental strategies are developed to identify the best performing patient subgroup based on the trial outcomes; but their negligence of other subgroups where the treatment could be equally effective usually causes inefficient utilization of the experimental efforts. To tackle these challenges, the authors propose a dynamic subgroup identification approach to enable the construction of more effective experimental strategies for practical clinical trials. The authors claim three major contributions in this study: 1) their approach allows the dynamic identification for best patient subgroups based on experimental data collected during the trial process; 2) the authors develop new algorithms to effectively merge patient subgroups with similar (highest) responses to the treatment and provide theoretical results to support their analyses; and 3) the proposed method is validated using a synthetic dataset constructed from a clinical trial on cirrhosis.
Strengths: ## Originality
The method presented in this paper looks to be novel. The authors have provided substantial theoretical analysis to evidence the originality and validity of their approach.
## Quality
Most parts of this paper are well-written and properly organized. The experimental results are reported with relevant statistics and are clearly evaluated and discussed with texts and visualizations.
## Clarity
The general clarity of this paper is fair. Experimental results in this paper are sufficiently discussed.
## Significance
The problem of patient subgroup identification is essential in clinical trials for the selection of target patient cohorts. For better utilization of the resources in a trial, dynamic identification of best performing patient subgroups and adaptive treatment assignments are imperative. The approach proposed in this paper enables effective patient subgroup identification using patient characteristics and treatment responses collected during different trial stages and allows adaptive optimization of the treatment assignment strategy.
Weaknesses: ## Related Works
There are two short paragraphs in the Introduction discussing the literature related to this study. However, to distinguish the experimental design approach in this paper from other research and highlight the contributions of this study, a more comprehensive comparison with related works is needed. It seems that there are many other studies, with similar focuses on subgroup identification in clinical trials, not sufficiently discussed in this paper. For instance:
- Adaptive identification and assessment of patient subgroups [1]
- Identification of patient subgroups with similar clinical characteristics (covariates) [2]
- Clustering of patient subgroups with different levels of benefits in clinical trials [3]
The authors are encouraged to provide discussions and comparisons with related works on patient subgroup identification to better demonstrate the novelty and advantage of this study.
Further, although the authors emphasize that their method focuses on the setting of covariate-adjusted response-adaptive (CARA) experiments, which is an under-explored area, the reviewer cannot find discussions on the contribution or benefit of including patient covariates ($X$) in the design of clinical trials. Covariate-based patient subgroup identification has already been studied in the machine learning literature, e.g., [2,3].
References:
[1] Guo, Wentian, Yuan Ji, and Daniel VT Catenacci. "A subgroup cluster‐based Bayesian adaptive design for precision medicine." Biometrics 73.2 (2017): 367-377.
[2] Lee, Beom S., et al. "A clustering method to identify who benefits most from the treatment group in clinical trials." Health Psychology and Behavioral Medicine: an Open Access Journal 2.1 (2014): 723-734.
[3] Xu, Jie, et al. "Machine learning enabled subgroup analysis with real-world data to inform clinical trial eligibility criteria design." Scientific Reports 13.1 (2023): 613.
## Clarity
### The role of patient covariates
As mentioned earlier, the role of patient covariates is not sufficiently discussed in the proposed experimental design approach. The only place related to the covariates seems to be line 7 of Algorithm 2, where previous trial results are randomly resampled. It is unclear how the experimental design is adjusted based on patient covariates. Since this paper focuses on the CARA setting and considers both treatment responses and patient covariates in subgroup identification, it is important for the authors to elaborate on the contribution of patient covariates in the proposed algorithms and how this study differs from conventional research on response-adaptive randomization (RAR) settings.
### Notations
There are many symbols introduced in the analysis of this paper without any explanation.
For instance, the variable $\hat{p}\_{tl}$ in Eq. 1 has never been explained.
The symbol $\mathcal{B}_{t,b}$ in Eq. 6 comes from nowhere.
In the meantime, there are so many similar symbols used in the discussion, and it is very difficult for the reviewer to tell their difference. The authors are encouraged to ensure the consistency in their notation. If possible, a notation table could help a lot to improve the clarity of this paper.
### Insufficient explanations
Some results or derivations in this paper are introduced without proper explanation. For instance, although the authors provide citations to the large deviation theory in LN 129, the equivalence between the correct identification probability and the optimization objective remains obscure. Similarly, it is unclear why the optimal treatment allocation can be derived from Eq. 1. Additionally, the hard-coded exponent 0.05 for $\Delta$ in LN 162 appears without any explanation.
## Correctness
The correctness of some proofs seems to be questionable. For instance, in the proof of Theorem 1, it is unclear why $\tau_{t,l} = \tau_{l}$. The inequality in LN 450 cannot be directly established according to the analysis in LN 451 – 453. Similarly, the final inequality in LN 457 is non-trivial and the authors should provide analysis to prove its correctness.
## Evaluation
### Dataset
The proposed experimental design approach is only evaluated on a synthetic dataset. To validate the general applicability and performance of this method, benchmark results on more datasets are necessary.
### Baselines
The authors have discussed many relevant studies in the Introduction. However, only three baselines are considered in the experiment. For a more comprehensive comparison, the authors are encouraged to include more baselines from related works to demonstrate the advantage of their method. Particularly, the reviewer is interested in the performance of contextual bandit algorithms and causal tree models.
### Metrics
Note that the experimental results are obtained on a synthetic dataset where the ground truth subgroup labels are available. The authors are encouraged to include additional metrics on clustering accuracy, e.g., purity score, normalized mutual information, etc., in the benchmark.
Technical Quality: 2
Clarity: 1
Questions for Authors: In summary, I have the following concerns about this paper.
1. The related works are not sufficiently discussed. The difference between this paper and relevant studies should be clearly illustrated.
2. It is unclear how the proposed method is different from conventional methods considering the RAR settings. There is no discussion on the contribution or benefit of including patient covariates ($X$) in the design of clinical trials.
3. Many symbols are introduced in the analysis of this paper without proper explanation.
4. The derivation of some key results is not sufficiently explained. For instance, the optimization objective below LN 129 and the optimal treatment allocation in Eq. 1.
5. The correctness of some proofs seems to be questionable. Specifically, the proof for Theorem 1. Please see the weakness section above for details.
6. The proposed method needs to be tested on more datasets, and the authors are encouraged to include more performance metrics in the benchmark.
7. There should be more baselines, e.g., causal tree and contextual bandit, in the benchmark to highlight the advantage of the proposed method.
8. How does the number of stages $T$ in a trial affect the convergence of the proposed method? What if there is only one stage? What if $T$ is small (below 4)?
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The limitations of this study are briefly discussed at the end of this paper. The authors identify the potential mismatch between their assumption on immediate observation of treatment responses and the delays commonly observed in real-world clinical trials. However, the potential negative societal impact of the clinical trial design approach in this paper is not sufficiently discussed. In addition, the authors are encouraged to discuss whether their approach can be generalized to deal with scenarios with multiple treatment options.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Thank you for pointing us to the references. While both Guo et al. (2017) and ours are in an adaptive experiment setting, there are two major differences. (i) Their design is Bayesian, relying on prior specification, while ours is frequentist and model-free. (ii) They do not discuss theoretical properties for the identified subgroups, while we provide theoretical results justifying the reliability of our approach. Next, there are two lines of literature for subgroup identification: (1) post-hoc analyses using previously collected data, and (2) adaptive data collection through randomized experiments. Lee et al. (2014) and Xu et al. (2023) align with the first, whereas our method focuses on the second. Lee et al. (2014) use clustering techniques based on data from prior randomized controlled trials, and Xu et al. (2023) identify subgroups from existing observational data using machine learning, both differing from our adaptive experiment setting.
2. In classical CARA designs (as in our design), patient covariates are used to define subgroups. Concretely, denote the covariate as $X_{it}$ and assume the covariate space $\mathcal{X}$ is partitioned into $m$ regions $\\{\mathcal{S}\_j\\}\_{j=1}^m$ (lines 98-100). CARA designs differ from RAR in that RAR does not consider covariates at all, optimizing treatment allocation solely based on past treatment assignments and outcomes. This oversight can lead to less effective treatment strategies by ignoring valuable patient-specific information that could significantly influence treatment responses. We also note that, in response to your Q7, we refined our approach using the augmented inverse probability weighting estimator, allowing us to incorporate covariates beyond those used to define subgroups and to adjust treatment allocation accordingly.
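To make the RAR/CARA contrast concrete, here is a minimal, purely illustrative Python sketch (the function names and the play-the-winner-style success-count update are our own simplification, not the algorithm in the paper): RAR computes allocation probabilities from past assignments and outcomes alone, while CARA restricts the same update to the patient's covariate-defined subgroup.

```python
def rar_prob_a(history):
    """RAR sketch: P(assign arm A) from past (arm, outcome) pairs only,
    ignoring covariates (a play-the-winner-style success count)."""
    wins_a = sum(y for arm, y in history if arm == "A")
    wins_b = sum(y for arm, y in history if arm == "B")
    return (wins_a + 1) / (wins_a + wins_b + 2)  # add-one smoothing

def cara_prob_a(history, x, subgroup_of):
    """CARA sketch: the same update, but computed only from past patients
    whose covariate falls in the same pre-specified subgroup S_j as x."""
    j = subgroup_of(x)
    return rar_prob_a([(arm, y) for xs, arm, y in history if subgroup_of(xs) == j])
```

When treatment effects differ across subgroups, the CARA probabilities diverge by subgroup while the RAR probability is a single pooled number, which is exactly the patient-specific information RAR discards.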
3. We apologize for missing the definition of $\hat{p}\_{tl}$. It is defined as $\hat{p}\_{tl}=\frac{\sum\_{s=1}^{t}\sum\_{i=1}^{n\_s}\mathbb{1}\_{(X\_{is}\in\mathcal{S}\_{(l)})}}{\sum\_{s=1}^{t}n\_s}$. $\mathcal{B}\_{t,b}$ is defined in Eq. 6 in preparation for the calculation of the loss function in Eq. 7. Additionally, $\Delta=\min(1,R\cdot(\frac{\sqrt{\sum\_{j=1}^{m}n\_{tj}\hat{\mathbb{V}}\_{tj}/m}}{\sqrt{\sum\_{j=1}^{m}\hat{\mathbb{V}}\_{tj}/n}})^{2\gamma})\approx \min(1,R\cdot n^{\gamma})$, where $\gamma$ is a small tuning parameter to ensure $\Delta<1$. We choose $\gamma=0.05$ and show that our procedure is not sensitive to this tuning parameter (Table 1 in the attached pdf).
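For concreteness, the two quantities defined above can be computed as in the following numpy sketch; this is only an illustration of the formulas in this reply, with hypothetical argument names, and $R$ defaulting to 1.

```python
import numpy as np

def p_hat_tl(subgroup_ids, l):
    """hat{p}_{tl}: fraction of all patients enrolled up to stage t whose
    covariate falls in subgroup S_(l) (ids pooled over stages s = 1..t)."""
    subgroup_ids = np.asarray(subgroup_ids)
    return np.mean(subgroup_ids == l)

def delta_factor(n_tj, v_tj, gamma=0.05, R=1.0):
    """Delta = min(1, R * (sqrt(sum_j n_tj V_tj / m) / sqrt(sum_j V_tj / n))^(2*gamma))."""
    n_tj = np.asarray(n_tj, dtype=float)   # per-subgroup sample sizes n_{tj}
    v_tj = np.asarray(v_tj, dtype=float)   # per-subgroup variance estimates V_{tj}
    m, n = len(n_tj), n_tj.sum()
    ratio = np.sqrt((n_tj * v_tj).sum() / m) / np.sqrt(v_tj.sum() / n)
    return min(1.0, R * ratio ** (2 * gamma))
```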
4. Due to the space limit in our first submission, we neglected technical details to justify Eq.1. In fact, the rate function in the optimization problem's objective is a monotone transformation of the correct identification probability in an asymptotic sense. Below is a brief derivation due to the space limit: $\lim\_{N\rightarrow\infty}\frac{1}{N}\log(1-\mathbb{P}( \hat{\tau}\_{\mathcal{T}\_1}\geq\max\_{j \notin\mathcal{T}\_1}\hat{\tau}\_j))=-\min\_{j\notin \mathcal{T}\_1}G(\mathcal{S}\_1,\mathcal{S}\_j;e\_1,e\_j)$. We are happy to provide more details if you raise additional concerns.
5. After inspecting our proof, we believe it is correct in the three places you pointed out. $\tau\_{t,l}=\tau\_l$ naturally holds by Assumption 1. This is because potential outcomes are independently and identically distributed for $i=1,\ldots,n\_t$, $t=1,\ldots,T$, implying $\tau\_{t,l}=\tau\_l$ in the proof of Theorem 1. This i.i.d. assumption is commonly made in the adaptive design literature. With the analysis in LN 451-453, we have $\lim\_{N\rightarrow\infty}\mathbb{P}(N^{\delta}|\tau\_{j}-\tau\_{\check{1}}|<C)=0$ and thus $\lim\_{N\rightarrow\infty}\mathbb{P}(N^{\delta}|\tau\_{j}-\tau\_{\check{1}}|+C< 2C)=0$. By Lemma 2 in the Appendix, we obtain $\lim\_{N\rightarrow\infty}\mathbb{P}(N^{\delta}|\hat{\tau}\_{tj}^*-\tau\_{j}| +N^{\delta}|\hat{\tau}\_{t,\check{1}}^*-\tau\_{\check{1}}|\geq 2C)=0$. Then we reach the conclusion in Eq.13. Similarly, with the analysis in LN 458-461, we have $\lim\_{N\rightarrow\infty}\mathbb{P}(N^{\frac{1}{2}}|\tau\_{j}-\tau\_{\check{1}}|<C)=1$. We also derive
$\lim\_{N\rightarrow\infty}\mathbb{P}(N^{\delta}|\hat{\tau}\_{tj}^*-\tau\_{j}|<C)=1$ and
$\lim\_{N\rightarrow\infty}\mathbb{P}(N^{\delta}|\hat{\tau }\_{t,\check{1}}^*-\tau\_{\check{1}}|<C)=1$. With $\delta<\frac{1}{2}$, we establish the result in LN 455.
6. We have added an additional case study using the National Supported Work program data (Figure 1 in the attached pdf).
7. For the comparison with various contextual MAB algorithms, we now extend our proposed design to an augmented inverse propensity score weighting (AIPW) estimator incorporating contextual information (Figure 2 in the attached PDF). For the causal tree, the comparison may not be entirely fair since the causal tree identifies subgroups after data collection, whereas our method designs the data collection mechanism to identify the best subgroups accurately. Thus, our approach is expected to have higher accuracy in identifying subgroups. We have provided a comparison (Figure 2 in the attached PDF). The performance of the causal tree model is similar to contextual bandit algorithms, and our proposed algorithm has the highest probability of identifying subgroups.
8. The purity score is similar to the correct selection probability, as both measure accuracy: the purity score assesses how well clusters contain a single class, while the correct selection probability evaluates identifying the best subgroups. Both range from 0 to 1, with higher values indicating better performance. Figures 1(c) and 1(d) in the attached PDF compare the normalized mutual information for our proposed design and three competing methods.
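As a side note, the purity score mentioned above can be computed in a few lines of numpy; this is an illustrative sketch with hypothetical names (NMI itself is usually taken from `sklearn.metrics.normalized_mutual_info_score` rather than hand-rolled).

```python
import numpy as np

def purity_score(true_labels, cluster_labels):
    """Purity: each cluster is credited with its most frequent true class;
    the score is the fraction of points so credited (1.0 = perfect)."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    total = 0
    for c in np.unique(cluster_labels):
        members = true_labels[cluster_labels == c]
        total += np.bincount(members).max()  # size of the majority class
    return total / len(true_labels)
```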
Per your suggestions, we will update our manuscript as follows:
- Add a more thorough literature review on post-hoc subgroup analysis
- Add a notation table
- Add additional simulation results, including contextual bandit, AIPW estimator, and normalized mutual information as an additional metric
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I appreciate the authors' clarification on related works and details in their proof.
However, regarding the new results presented in the rebuttal, I still have the following concerns.
#### The usage of patient covariates
Thanks for the clarification on the differences between the proposed method and RAR and CARA approaches.
It seems that the trial strategy proposed in this paper relies on some conventional covariate-adjusted strategies to generate the initial subgroup division.
However, its robustness with respect to the subgroup initialization is not properly evaluated.
What if the initial subgroups are not correctly aligned to the distribution of treatment responses in a population (which could be common in real-world applications)?
#### Tuning parameter $\\gamma$
The authors mention that the number $\\gamma=0.05$ is a small tuning parameter to ensure that the bootstrap factor $\\Delta<1$ in Eq. (3) and have provided an ablation study about $\\gamma$. However, this tuning parameter seems to be useless according to Table 1 provided in the authors' rebuttal. As affirmed by the authors, their proposed procedure is insensitive to $\\gamma$. When $\\gamma=1$, $\\Delta$ could always be equal to 1, which completely discards the proposed bootstrap strategy. This raises further concerns about the novelty of this paper.
#### Extension with augmented IPW.
Comparing Fig. 1 in the main manuscript and Fig. 2(a) in the authors' rebuttal, the inclusion of IPW estimator leads to no obvious improvement in performance.
Why is this happening? Contextual information should allow more precise treatment allocation during the trial and contribute to faster convergence (e.g., high correct selection probability with fewer stages). Therefore, the improvement mentioned in the authors' rebuttal doesn't seem to be effective.
Further, this may suggest that the synthetic dataset used in the experiment is inappropriate for serious performance evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising additional concerns. Due to space constraints, we hope to break down our response into two comments. The first comment below addresses your first and third concerns.
**Usage of covariates**: To ensure our proposed design is practically relevant, our design does not incorporate any data-driven methods for identifying initial subgroup divisions after the trial has started. This is because regulatory agencies often strongly encourage that subgroups be pre-specified to enhance the interpretability of trial results and prevent data mining during the planning phase of a clinical trial; see some sample RCT designs with pre-fixed subgroups in Murray et al. (2018), Thall et al. (2003), and Hu et al. (2015). In accordance with this guidance, our design aims to dynamically merge subgroups if certain subgroups show homogeneous effects (note that merging does not create new divisions of subgroups) and sequentially adjusts treatment assignment probabilities within those merged subgroups to identify the best one efficiently. We do see that adding an initialization stage can be helpful when the initial subgroups are not informative of the treatment effect heterogeneity. As a future direction, we plan to explore the possibility of using tree-based methods to identify subgroups and sequentially merge them.
**Extension with augmented IPW**: Indeed, it may seem counterintuitive that including additional covariates does not significantly improve the empirical performance. This is because our proposed method employs the IPW estimator with estimated propensity scores, which, as justified by Hirano et al. (2003), already attains the semiparametric efficiency bound. Thus, in the supplementary simulation study, adjusting for additional covariates with AIPW does not further improve the empirical performance of the treatment effect estimator, and therefore, the empirical performance of our design remains unchanged. We hope this addresses your concern! Thank you very much for carefully going through our new simulation results.
Reference:
- Thall, P. F., Wathen, J. K., Bekele, B. N., Champlin, R. E., Baker, L. H., and Benjamin, R. S.(2003). Hierarchical bayesian approaches to phase ii trials in diseases with multiple subtypes. Statistics in medicine, 22(5):763–780.
- Murray, T. A., Yuan, Y., Thall, P. F., Elizondo, J. H., and Hofstetter, W. L. (2018). A utility-based design for randomized comparative trials with ordinal outcomes and prognostic subgroups.Biometrics, 74(3):1095–1103
- Hu, J., Zhu, H., and Hu, F. (2015). A unified family of covariate-adjusted response-adaptive designs based on efficiency and ethics. Journal of the American Statistical Association, 110(509):357–367.
- Hirano, K., Imbens, G. W., and Ridder, G. (2003). Efficient estimation of average treatment effects using the estimated propensity score. Econometrica, 71(4):1161–1189.
---
Rebuttal 2:
Comment: The second comment below addresses your second concern:
**Tuning parameter**: We are sorry that the previous presentation of our procedure might have caused some confusion. In the following, we would like to clarify that our bootstrap procedure is not intended to select $\gamma$, and both $\gamma$ and $\Delta$ are only adopted to select the hyperparameter pair $(c\_\texttt{L},c\_\texttt{R})$, which determines the neighborhood for merging subgroups.
First, we will replace lines 159-170 with the following in our revision to clarify our procedure:
- **Dynamic identification of the best subgroups (Algorithm 2)**: The dynamic subgroup identification algorithm involves a resampling step at each stage. Specifically, at each stage $t$, we generate bootstrap samples $\hat{\mathbf{\tau}}\_t^{\circ}$ from a Gaussian distribution centered at $\hat{\mathbf{\tau}}\_t$, which is estimated via Eq (8) from the data collected up to stage $t$. We then identify the best subgroups at Stage $t$ using
\begin{align*}
\hat{\mathcal{T}}\_{t1} = \\{k: w\_{k,(1) }^{\circ} = 1,k = 1,\ldots,m\\}, \ (5)
\end{align*} where $w\_{k,(1)}^{\circ}=\mathbb{1}\\{-c^t\_{\texttt{L}}\cdot N\_t^{-\delta}\cdot \hat{\mathbb{V}}\_{t,( 1)}^{\delta}\leqslant (\hat{\tau}\_{tk}^{\circ}-\hat{\tau}\_{t,(1) }^{\circ})\leqslant c^t\_{\texttt{R}}\cdot N\_t^{-\delta}\cdot \hat{\mathbb{V}}\_{t,(1)}^{\delta }\\}$. $N_t = \sum\_{s=1}^t n\_s$ and $\delta = 0.25$. Note that the above formulation relies on a pair of hyperparameters $(c^t\_{\texttt{L}},c^t\_{\texttt{R}})$, which are selected data-adaptively. In what follows, we shall illustrate the algorithm for selecting hyperparameters.
- **Hyperparameter selection (Algorithm 3)**: In line 7 of Algorithm 3, we adopt a bootstrap method and propose several alternative bootstrap methods for this step in the Appendix (Section H). Algorithm 3 involves a resampling step that generates bootstrap samples $\hat{\mathbf{\tau}}\_t^*$ from a Gaussian distribution centered at $\mathbf{\tau}\_t^*$ at Stage 1. In line 2, we compute $\mathbf{\tau}\_t^*=(\tau\_{t1}^*,\ldots,\tau\_{tm\_t^*}^{*})^{\prime}$ as \begin{align*} \tau\_{tj}^*=\Delta\_t \cdot \frac{\sum\_{j=1}^{m\_t^*}\hat\tau\_{tj}}{m\_t^*}+(1-\Delta\_t) \cdot \hat\tau\_{tj}, \ j = 1,\ldots,m, \ (3) \end{align*} where $\Delta\_t =\min\\{0.99,\frac{\sum\_{j=1}^{m\_t^*}\hat{\mathbb{V}}\_{tj}}{N\_t\sum\_{j=1}^{m\_t^*}(\hat{\tau}\_{tj}-\overline{\hat{\tau}}\_t)^{2}}\times N\_t^{\gamma}\\}$ and $\gamma \in (0,0.2)$. We choose $\gamma = 0.05$ in our simulation studies, and our procedure is shown to be insensitive to the choice of $\gamma$. In line 8, we compute $\hat{\mathbf{\tau}}\_t^*=(\hat{\tau}\_{t1}^*,\ldots,\hat{\tau}\_{tm\_t^*}^*)^{\prime}$ at Stage $t$ for $t>1$ as
\begin{align*}
\hat{\tau}\_{tj}^*=\Delta\_t \cdot \frac{\sum\_{j=1}^{m\_t^*}\hat{\tau}\_{tj}^{\circ}}{m\_t^*}+(1-\Delta\_t) \cdot \hat{\tau }\_{tj}^{\circ}, \ (4)
\end{align*} where $\hat{\tau}\_{tj}^{\circ}$ is computed with the bootstrap samples as in Eq (8).
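The two steps above (tie-set selection in Eq (5) and shrinkage toward the common mean in Eqs (3)/(4)) can be sketched in a few lines of numpy. This is only an illustration with hypothetical names; it omits the data-adaptive search over the hyperparameter pair $(c\_\texttt{L}, c\_\texttt{R})$ and the bootstrap loop itself.

```python
import numpy as np

def tie_set(tau_circ, v_hat, N_t, c_L, c_R, delta=0.25):
    """Eq (5) sketch: subgroups whose bootstrap effect lies within the
    data-adaptive neighbourhood of the current best subgroup."""
    tau_circ = np.asarray(tau_circ, dtype=float)
    v_hat = np.asarray(v_hat, dtype=float)
    best = np.argmax(tau_circ)
    width = N_t ** (-delta) * v_hat[best] ** delta  # N_t^{-delta} * V_{t,(1)}^{delta}
    diff = tau_circ - tau_circ[best]                # tau_k^o - tau_(1)^o (<= 0)
    return set(np.flatnonzero((diff >= -c_L * width) & (diff <= c_R * width)))

def shrink(tau_hat, delta_t):
    """Eq (3)/(4) sketch: shrink each estimate toward the common mean."""
    tau_hat = np.asarray(tau_hat, dtype=float)
    return delta_t * tau_hat.mean() + (1 - delta_t) * tau_hat
```

Note that, as in the reply, the shrinkage factor `delta_t` only enters the hyperparameter-selection path, while the tie set itself is formed directly from the bootstrap effects.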
Through this revision, we hope to clarify that
- The final tie set is selected using Eq (5), not depending on $\Delta$ and $\gamma$.
- You are absolutely correct that $\Delta$ cannot be 1; we will revise our algorithm to put an upper bound of 0.99 on $\Delta$. Additionally, there was a typo in Table 1; the magnitude of $\gamma$ we have tested is actually from $0.00$ to $0.10$. We hope to provide additional simulation results below. Due to the time limit, we are only able to provide results for a reduced sample size with $n_t = 400$, $T=4$ and a reduced $B=400$ on the correct selection probability, Monte Carlo bias, and variance for various choices of $\gamma$:
- $\gamma = 0.05$, CSP: $0.68, 0.81, 0.85, 0.86$; $\sqrt{N}$Bias: $35.20$; SD: $44.71$;
- $\gamma = 0.10$, CSP: $0.68, 0.81, 0.85, 0.86$; $\sqrt{N}$Bias: $35.20$; SD: $44.71$;
- $\gamma = 0.15$, CSP: $0.67, 0.81, 0.86, 0.86$; $\sqrt{N}$Bias: $35.21$; SD: $44.72$;
- $\gamma = 0.20$, CSP: $0.68, 0.81, 0.86, 0.86$; $\sqrt{N}$Bias: $34.42$; SD: $44.75$.
The performance of our method in this response may not be as strong as the results in the submitted manuscript due to time constraints in addressing your concerns. The results still demonstrate that our method is not sensitive to the choice of $\gamma$.
We hope this revision of our manuscript will address your concerns!
---
Rebuttal Comment 2.1:
Comment: I appreciate the authors' additional rebuttal which has addressed my concerns on subgroup initialization and the contribution of AIPW.
#### More effective baselines
According to the authors' response, a more reasonable baseline should be contextual bandits with AIPW estimation as feature maps for treatment assignment (and using clustering algorithms like K-means or agglomerative clustering to merge subgroups with similar treatment responses as desired in this paper). The current baselines seem to be quite weak.
#### Tuning parameter
I am not fully convinced by the authors' response.
- **Tie set selection**: The tie set selection in Eq. (5) is dependent on $\\hat{\\tau}^{o}_{tk}$, which is computed with the bootstrap samples as clarified by the authors. This implies that the tie set selection is in fact affected by $\\Delta$ and $\\gamma$.
- **Upper bound of $\\Delta$**: Setting the upper bound of $\\Delta$ to 0.99 is kind of arbitrary and lacks rigor.
- **Typo in new results**: I appreciate the new results by the authors. However, it is difficult to verify whether the range of parameter $\\gamma$ reported in Table 1 of the rebuttal is a typo.
Therefore, I can only increase my rating of this paper from 3 to 4.
---
Reply to Comment 2.1.1:
Comment: Thank you for your reply and recognition. We regret that there are still concerns regarding our paper.
**More Effective Baselines**
Thank you for suggesting an additional baseline. We will include a comparison with this baseline in the revised manuscript.
**Tuning Parameter**
- **Tie Set Selection:** Our hyperparameter selection process is independent of the dynamic identification of the best subgroups. Specifically, there are two separate bootstrap procedures: one for hyperparameter selection and another for dynamic identification. In both procedures, we compute $\hat{\tau }\_{tj}^{\circ}$ with bootstrap samples independently. For hyperparameter selection, $\hat{\tau }\_{tj}^{\circ}$, $\gamma$, and $\Delta$ are used to calculate $\hat{\tau }\_{tj}^*$ and select the hyperparameter pair $(c\_\texttt{L},c\_\texttt{R})$. For dynamic identification, $\hat{\tau }\_{tj}^{\circ}$ is used to identify the tie set in Eq (5), and this process does not depend on $\gamma$ and $\Delta$.
- **Upper Bound of $\Delta$:** As previously mentioned, $\gamma$ is a small tuning parameter to ensure $\Delta < 1$. We have set the upper bound at 0.99 as a precautionary measure.
- **Typo in New Results:** We apologize for the typo in Table 1. To address this issue, we are prepared to provide the code for the simulation to clarify any discrepancies. | Summary: The paper introduces a dynamic subgroup identification strategy within the framework of covariate-adjusted response-adaptive randomization, addressing the need for more nuanced subgroup analysis in clinical trials. This strategy aims to optimize treatment allocation dynamically and identify subgroups that demonstrate significant treatment effects, which is crucial in practice.
Strengths: 1. The method is highly relevant to modern clinical trial designs, where there is a pressing need to identify patient subgroups with differential treatment responses efficiently.
2. The paper is strong in theoretical development, providing rigorous proofs and formulations that demonstrate the statistical validity and efficiency of the estimator for the best subgroup treatment effect.
3. The approach looks novel, from my perspective.
Weaknesses: 1. I understand this is a very technical paper, and the authors have spent a lot of effort making it easy to follow. My minor comment is that providing more intuition on why the complicated design is needed would be helpful. For example, why do we need resampling and the bootstrap? What kind of technical challenge does the bootstrap help overcome?
2. The objective function in line 128 seems a little mismatched with the objective of best subgroup identification. For example, if there are two arms in $\mathcal{T}_1$ with variance zero and very large respectively, it seems that optimizing the objective in line 128 will lead to the solution that we forget about the super large one and devote most efforts to the one with variance 0. Thus, I am thinking about whether there is some analysis on the probability that we successfully identify $\mathcal{T}_1$? More specifically, can we always guarantee we identify everything in $\mathcal{T}_1$ and we never include something out of $\mathcal{T}_1$?
Technical Quality: 3
Clarity: 3
Questions for Authors: See previous comments.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See previous comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Thank you for your encouragement. Below, we provide our understanding of why adaptive designs can be preferable in identifying the best subgroups compared to other alternatives, why resampling and bootstrapping are critical in our procedure, and what technical challenges bootstrapping addresses.
- To illustrate why adaptive designs can be preferable for identifying the best subgroups, we first review the literature on subgroup identification and put our method into perspective. There are two lines of literature: the first conducts a post-hoc analysis of data already collected from previous studies, while the second focuses on adaptively collecting new data via randomized experiments to ascertain the best subgroup. Designing an adaptive data collection mechanism is the focus of the second line, and our method aligns with it. We follow the second line because post hoc subgroup analyses, while requiring no new data collection, rely on untestable causal assumptions that limit the credibility of causal conclusions (e.g., the unconfoundedness assumption is necessary for causal inference in observational studies, but unmeasured confounders can compromise these conclusions). Conversely, in randomized experiments, valid causal conclusions do not depend on such assumptions, although the analysis of subgroup treatment effects may still face biases such as the winner's curse, particularly if heterogeneous effects are selected from the data in an ad hoc manner (Guo and He, 2021; Andrews et al., 2019). Collecting data through randomized experiments to accurately identify the subgroups that benefit most from the treatment avoids the limitations of analyzing existing data: treatments are randomly assigned, which eliminates the influence of unmeasured confounders and allows robust, valid causal inferences without the need for untestable assumptions.
Moreover, our adaptive design method permits iterative adjustments based on new information, enhancing trial flexibility and efficiency. We also hope to gently note that we are not deliberately constructing a complex adaptive experimental design strategy. Given randomized experiments can be time-consuming, costly, and potentially result in adverse patient outcomes if unsuccessful, our approach aims to efficiently allocate experimental resources within a limited budget to identify the most beneficial subgroup.
- Bootstrap and resampling-based methods play a crucial role when dealing with subgroups that have tied effect sizes in the population. To shed light on this issue, consider a simpler scenario without adaptive data collection, where the objective is to identify the subgroup clusters (referred to as "tie sets" in our paper) and rank these clusters according to their average effect sizes with statistical confidence. Suppose we have in total $p$ subgroups forming $m$ clusters, with population effect sizes $\underbrace{\beta_1 = \ldots = \beta_k}_{\text{Cluster } 1}, \beta_{k+1}, \ldots, \beta_{l-1}, \underbrace{\beta_l = \ldots = \beta_p}_{\text{Cluster } m}$ and averaged cluster effect sizes $\alpha_j = \sum_{l \in \text{Cluster } j} w_l \beta_l$, where $\sum_{l \in \text{Cluster } j} w_l = 1$, $j = 1, \ldots, m$. The bootstrap procedure proposed in our method then achieves two goals simultaneously: it provides valid statistical inference (confidence intervals and consistent point estimates) on the *ordered* averaged cluster effect sizes $\alpha_{(1)}, \alpha_{(2)}, \ldots, \alpha_{(m)}$, and it identifies the clusters with high probability. While numerous methods exist for subject clustering, providing confidence intervals for the ordered estimated mean effect sizes is a challenging task due to the winner's curse bias; this issue is well known in the existing literature, where constructing confidence intervals for order statistics is particularly difficult. Our bootstrap procedure is therefore important for identifying the tie set while delivering valid statistical inference on the identified best set of subgroups.
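To make the role of the bootstrap concrete, here is a toy sketch (our own illustration, not the paper's actual procedure): subgroup effect estimates are bootstrapped, sorted estimates closer than a tolerance are merged into tie sets (a fixed `delta` stands in for the paper's data-driven threshold, with equal cluster weights $w_l$), and a percentile interval is formed for the best ordered cluster mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: p = 4 subgroups whose population effects form m = 2 tie sets,
# {beta_1 = beta_2 = 0.5} and {beta_3 = beta_4 = 1.0}, with unit Gaussian noise.
true_beta = np.array([0.5, 0.5, 1.0, 1.0])
n = 2000                                     # observations per subgroup
data = true_beta[:, None] + rng.normal(size=(4, n))

B, delta = 500, 0.1                          # bootstrap draws, tie tolerance
best_cluster_means = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)         # bootstrap resample (shared index for simplicity)
    beta_hat = data[:, idx].mean(axis=1)
    order = np.argsort(beta_hat)
    clusters, current = [], [order[0]]
    for j in order[1:]:                      # merge sorted estimates closer than delta
        if beta_hat[j] - beta_hat[current[-1]] < delta:
            current.append(j)
        else:
            clusters.append(current)
            current = [j]
    clusters.append(current)
    # Equal-weight cluster averages; keep the largest ("best" tie set)
    best_cluster_means.append(max(beta_hat[c].mean() for c in clusters))

lo, hi = np.percentile(best_cluster_means, [2.5, 97.5])
print(f"95% percentile CI for the best cluster effect: [{lo:.2f}, {hi:.2f}]")
```

In this toy run the interval concentrates around the true best cluster effect of 1.0; the paper's actual procedure additionally handles adaptive allocation across stages.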
- Thank you for your question. When there are two arms with the same treatment effects but different variances, our algorithm can effectively manage the allocation efforts, avoiding the scenario where most effort is allocated to the arm with zero variance. We illustrate this in a two-stage adaptive experiment. At the start of Stage 1, we adopt the same treatment allocation for all subgroups. At the end of Stage 1, if there are two arms (or subgroups) in $\mathcal{T}_1$ with similar treatment effects but one with zero variance and the other with large variance, our algorithm will identify these two competing subgroups and then merge them into the best "set" $\hat{\mathcal{T}}_1$. In Stage 2, since both subgroups belong to $\hat{\mathcal{T}}_1$, they will be assigned the same treatment allocation, denoted $\hat{e}_1^*$. Additionally, Theorem 1 in our manuscript provides an analysis of the probability of successfully identifying $\mathcal{T}_1$: we show that as the sample size tends to infinity, we always correctly identify the best set of subgroups. Therefore, we can guarantee that everything in $\mathcal{T}_1$ is identified and that nothing outside $\mathcal{T}_1$ is included.
Reference:
- Andrews, I., Kitagawa, T., and McCloskey, A. (2019). Inference on winners. Technical report, National Bureau of Economic Research.
- Guo, X. and He, X. (2021). Inference on selected subgroups in clinical trials. Journal of the American Statistical Association, 116(535):1498–1506.
---
Rebuttal Comment 1.1:
Comment: I really appreciate the authors' efforts in clarifying my concerns, which are very helpful. Thanks! | Rebuttal 1:
Rebuttal: We want to thank all of our reviewers for their very insightful suggestions and comments. We have made our best efforts to address the questions and comments raised by our reviewers.
To supplement our simulation studies and to provide additional information in response to some questions, we provide a pdf file which includes the following figures and a table:
- Figure 1: Comparison of the correct selection probability and normalized mutual information among three conventional methods and our proposed design strategy based on an additional dataset.
- Table 1: Simulation results that demonstrate the insensitivity of our method to the choice of tuning parameters.
- Figure 2: Comparison of the correct selection probability and normalized mutual information among causal tree model, complete randomization with AIPW estimator, contextual bandit algorithms including epsilon greedy algorithm and upper confidence bound 1 algorithm with AIPW estimator, and our proposed design with IPW estimator and AIPW estimator.
We greatly appreciate the reviewers for taking the time and effort to provide their valuable feedback.
Pdf: /pdf/81de7154a51bfeed30ed635503cb3350a8d7704b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The High Line: Exact Risk and Learning Rate Curves of Stochastic Adaptive Learning Rate Algorithms | Accept (poster) | Summary: The paper is a continuation of a broad line of work exploring deterministic limits of stochastic gradient descent. In particular, the author(s) follow one specific branch of the many, which derives a solution to a coupled integro-differential equation by passing to the complex space. The novelty lies mainly in the fact that they derive, for least squares, deterministic limits of dynamics that have adaptive step-sizes, for non-identity covariance of the observed dataset. Industrially, this is important as the idealized scenario of fixed step-sizes is __very much__ idealized, and realistic datasets have structure. Thanks to the general expressions (under 6 assumptions that are claimed to be not super-restrictive), they can proceed to study the various differences between standard schedules. The limiting learning rate can be derived, as well as the scaling of the risk along time. This is done for Line Search, Polyak step-size, and AdaGrad-Norm. Interestingly, their bounds suggest that for covariances with largely separated eigenvalues (say, power law spectrum), there should be a change in the phenomenology (i.e. a phase transition). They verify this for a power law covariance, which exhibits three phases depending on the parameters of the problem (notably the starting distance from $X^{\star}$). All is accompanied by experiments corroborating the predictions.
Strengths: The motivation in lines 17-31 is very well stated. As a reader, I would be inclined to continue parsing the paper.
- Proposition 4.3 states that with a null initialization $X_0 = 0$ and subject to a condition on the covariance (which roughly says that $\lambda_{\min}$ can be small but that there is a very small number of vanishing eigenvalues), if the ground truth signal is spread out, then the learning rate is never zero. For symmetric activations (say, phase retrieval), and constant learning rate, I think you cannot really start at zero because you have a symmetry, you need at least a tiny bit of overlap. From this result, it appears that you can start from the most useless guess and still get away with it, provably. Interesting.
- The overarching aim of the work is very important, as knowing that the behavior of SGD is predictable is pivotal for understanding how our models work.
- The techniques are of independent interest and are a continuation of a line of work that is systematically, step by step, developing a global answer to these questions.
- The result that Polyak's step-size converges to the inverse of the average eigenvalue is yet another validation of a method originated from convex optimization holding in more generality. This is reassuring.
Weaknesses: Weaknesses are questions and questions are weaknesses, in some sense. Once the author(s) engage, I will reorder things that get answered and things that remain as "concerns".
- Proposition 4.4 supposes that $\mathscr{D}_i^2(0)$ could potentially have a different scaling at each site $i$. I understand that the assumption includes an uninformative initialization $\delta = 0$. However, when is it really that one can think of a setting in which you can have an initialization that scales like $\lambda_i^{-\delta}$ for non trivial $\delta$? The result is very aesthetic, but I did not think through of its realistic value. Anyways, it is very nice so I am __not__ criticizing the math. Maybe I am just interested in understanding if you have in mind something.
- In the Checklist, you state that you have been "__very careful__ \[...\] to include __all__ the assumptions needed". I agree with this statement. However, as a reader, I do not agree with this being the limitation of your work, as in some sense you claim over the whole text that your assumptions cover standard adaptive step-sizes (hence you implicitly claim that this theory is all we need). I would be very happy to see a paragraph where you clearly state your limitations. Where does your theory fail? Where does the technique __not__ extend naturally? If you want to sell me your paper, I do not believe its value is in the assumptions. For one, the ideas and the techniques are of independent interest. Secondly, it would be sad if the bottleneck were the assumptions.
- Your equation (7) is found across some other works. One of the earliest appearances is (Yoshida and Okada, 2019). However, from a first check, I would guess that you are missing a term on the first object. To be fair, I quickly tried to re-derive it and I would guess that you need a further $H_{2, t}\mathscr{V}_{12, i}(t)$ in the first expression. Am I wrong? If so, please correct me. It looks like the equations are missing some symmetry of the dynamics.
- I am concerned about the scaling. My understanding of the literature is that scalings are very important. Your assumptions require that $\lVert K\rVert$ is bounded independently of the dimension and that $\lVert X_0\rVert$ is bounded, independently of the dimension. Moreover, your step-size is $\frac{1}{d}\mathfrak{g}_k$. Therefore, at the beginning you move by very little, I think by $\propto \frac{1}{d}$. What is done in some other references is very different. Let me take an example, and correct me if I am wrong.
1. You can derive heuristically deterministic dynamics for step-size $\mathfrak{g}_k$ (not normalized), starting from a standard Gaussian vector $X_0$, with data-points being i.i.d. Gaussians $a\sim \mathcal{N}\left(0,\frac{1}{d}I_d\right)$. Now, the step-size is not divided by dimension, the $X_0\sim \mathcal{N}(0, I_d)$ is not bounded in norm by something independent of $d$, and the $a$ matches your assumptions. In this case however, the signal is far stronger, as we are removing two normalizations you indeed have. Below are my questions regarding this.
2. Can you clarify why it makes sense to choose your scaling? Apart from it allowing the proofs.
3. In particular, how is it not possible to allow for norms dependent on $d$, and in parallel, how is it that the signal is enough to be not stuck at the initialization?
4. Does this implicitly mean that you are exploring a slow regime? Say, a Gradient Flow? What happens if in your experiments you just change the scalings to something larger in magnitude?
#### Typos
__NOTE__: I am including for completeness typos in "weaknesses". Please do not count them as such.
- (line 560) In the reminder of $\Omega$, you are missing a $\max$.
- (equation 20) there is no $\Gamma(t)$ in the expression, right?
- (lines 1140-1141) the way of quoting "noise" and "variance" is not the correct one, TeX-wise.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please note that some of the questions below are very ingenuous.
- (lines 49-57) This is more of an impression. Why is $\lambda_{\min}(K) > C > 0$ for you a situation of strong anisotropy? The power-law case is anisotropic in that the minimum eigenvalue is not bounded below by a constant as $d\to\infty$. Put another way, I do not understand how the __minimum__ eigenvalue being bounded below from zero would describe the whole behavior of the covariance profile of the data-points $a$. I understand that this might be related to the fact that the optimal strategy uses the trace, so you say something along the lines of "if the minimum eigenvalue is very well distant from the average then there is anisotropy". At the very least, it appeared to me as a reader that you somehow __define__ strong anisotropy as this eigenvalue condition, while in fact it is a consequence of the algorithmic behavior that you can tune the eigenvalues in this "anisotropic" way and observe that "greed can be arbitrarily bad".
- The original introduction of the Polyak step-size is that of a step-size that uses the optimal value of the function, and the value of the current iterate. In your case, what you call Polyak step-size has a connection with the original method, which is related to your assumptions, as you find that eventually it gets to a $\gamma_{\infty}$ that takes care of the geometry of the function. From a pedagogical point of view, it would be nice to comment further on this.
- Theorem 2.1 states that the concentration is for any $\epsilon \in \left(0,\frac{1}{2}\right)$, line 48 says "for any inverse power of d", and Theorem B.1, Corollary B.1 cited below Theorem 3.1 in line 195 give a result for existence of an $\epsilon$ such that the approximate solution of the integro-differential equation is verified. Can you shed light on these differences? Since it is your main Theorem, it would be nice to have an explicit discussion of what goes on where. More in detail, why do you get from existence in the appendix to a whole interval of $\epsilon$ (that by the way gives a rather slow decay wrt to say $\epsilon > 1$), to the claim in text "for any inverse power"? I have to be honest here: I did not spend time re-doing all the proofs.
- Related to the point above, in equation (27), your supremum is over $T\wedge \hat{\tau}_M$. In line 531 you state that you will not specify which stopping time it is when it is "clear from context", but for (27), given that you want to prove (26), my natural question is if stability is for $\hat{\tau}_M(S)$ or $\hat{\tau}_M(\mathscr{S})$, as you only mention one. At the very least, it is not clear from context here which stopping time it is in (27).
- Lemma C.1 requires as condition that $S(t, z)$ is an approximate solution over the $\xi$-mesh. How do you guarantee that this condition holds in your case? If ever, it is not evident from text that this is the case. If it is a direct consequence of arguments in reference 14, I would appreciate it being made explicit.
- Can you clarify the sentence of lines 239-240? Ignore the "sufficiently anisotropic" aspect which I already discussed above. The idea that smaller learning rates lead to under-performance is not entirely true. In the simplest setting if you over-shoot with your learning rate you will never converge. I believe here this is tightly linked to your Table 1, but there line search and Polyak have the same convergence rate for the risk. Where is the difference?
- In Figure 1, how many runs do you make? It would be nice to know.
- The same question above but for any other plot where you show error bars.
- (line 1125) You choose as scaling for the initialization $X_0\sim\mathcal{N}\left(0,\frac{I_d}{\sqrt{d}}\right)$ and the ground truth $X^{\star} = \frac{1}{\sqrt{d}}\mathbf{1}_d$. The norm of the latter is $1$, but the former has norm concentrating at $\sqrt{\sqrt{d}}$, and this violates your assumptions (not bounded by a constant independent of $d$). Is it a typo? Or is it that the experiment does not match the assumptions? If I look at the scaling of Figure 3, line 1099, you indeed sample $X_0\sim \mathcal{N}\left(0,\frac{I_d}{d}\right)$.
- Proposition 4.4 is nice. What happens when $\beta +\delta > 2$?
My soundness score is due to the many questions. I hope I will raise it after the discussion. The overall score follows the same principle.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review and careful reading of our paper, including the appendix. We address below all of the reviewer’s comments. *Due to space limitations, we will include an additional "Official Comment" to answer all the reviewer's questions.*
**Practical application of power law scaling on $\mathscr{D}_i^2(0)$.** First, it is important to note that this power law scaling is not on the initialization but rather on the distance from the initialization to the ground truth (signal) $X^\star$. If one initializes at 0, which is reasonable to do in the least squares setting, then this is a power law scaling on the ground truth. In settings where the data has power law covariance, one can also see power law scaling in the ground truth. For more details on this, see, for example, reference [56] mentioned in the paper, particularly their discussion of scaling laws in Section 6.
**Limitations of the paper.** We have added a “Limitations” paragraph. For the version going into the main paper, see “All Reviewers Rebuttal”.
**Coupled ODEs in Eq (7).** Thank you for pointing out the potential issue with Equation (7). You are correct that there was an oversight in the first term of Eq (7). We have revised the equation to include the missing term and corrected the ODEs. Page 17 contains the correct ODEs for $S$ and reflects the updated formulation. We appreciate your attention to detail and the opportunity to correct this error.
**Scalings.** Thank you for the questions. Please see the discussion for all reviewers, where we discuss other scalings beside $(1/d)$.
You also asked about whether a learning rate on the scale of $1/d$ is large enough for the algorithm to really move away from initialization. The key here is that we work in the regime where the number of iterations is proportional to dimension. Thus, even with learning rate of size $1/d$, one sees significant movement after order $d$ iterations. Indeed, for examples such as linear or logistic regression, this learning rate yields convergence of the risk in order $d$ iterations.
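As a toy numerical sanity check of this point (our own sketch, not code from the paper): streaming SGD on noiseless least squares with identity covariance $K = I$, a ground truth of norm $O(1)$, and stepsize $1/d$ reduces the risk by orders of magnitude within a number of iterations proportional to $d$.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 200
steps = 10 * d                               # iteration count proportional to dimension
X_star = rng.normal(0, 1 / np.sqrt(d), size=d)   # ground truth with norm O(1)
X = np.zeros(d)

def risk(X):
    # Population least-squares risk 0.5 * E[(a @ (X - X_star))^2] with K = I
    return 0.5 * np.sum((X - X_star) ** 2)

risk0 = risk(X)
for _ in range(steps):
    a = rng.normal(size=d)                   # fresh sample each step, a ~ N(0, I)
    X -= (1.0 / d) * a * (a @ (X - X_star))  # SGD with stepsize of size 1/d
print(f"risk reduced by factor {risk(X) / risk0:.1e} after {steps} steps")
```

The constants here (d = 200, 10d steps) are arbitrary choices for illustration; the qualitative point is only that a stepsize of size $1/d$ produces macroscopic movement over order-$d$ iterations.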
You are correct that, in certain cases (e.g. phase retrieval with a cold start), the algorithm can become stuck at initialization and require more than order $d$ iterations to converge. This is also captured in our result. The theorem is not a statement about the convergence of the algorithm to an optimal solution, but rather about the convergence of the SGD dynamics to a deterministic curve. Thus, our theorem can accurately predict the risk trajectory for order $d$ iterations, even when the risk does not converge to zero in that time. We will add that, to capture the escape of phase retrieval from the high-dimensional saddle, $O(d \log d)$ iterations are needed, and moreover the high-dimensional limit of the risk curve will contain non-concentrating and non-vanishing effects of SGD noise (this is implicit in Tan and Vershynin, *Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval*, 2019).
Finally, you asked about our choice to bound the norm of $X$, independent of $d$. Because we are interested in attaining a high-dimensional limit of the dynamics, we need to scale things in such a way that the risk is dimension-independent. In our set-up the risk is always expressible as a (generally non-linear) function of the inner product $X^Ta$, so we need this inner product to be dimension-independent. Our choice of scaling achieves this (although one could also, for example, simultaneously adjust the scaling of $X$ and $a$ to achieve an equivalent result). There are, as you suggest, interesting problems in which the norm of $X$ may grow with $d$, but those are beyond the scope of this paper.
**Typos.** Thank you for pointing out these things. *We corrected them.*
---
Rebuttal Comment 1.1:
Title: (Response to questions)
Comment: **Answers to the reviewers’ questions**
* **Strong anisotropy.** When we say “strong anisotropy” we are not referring to the lower bound on the eigenvalues. Rather, we refer to the context where, for fixed $\lambda_{\min}$, we have $\lambda_{\min}/(\text{Tr}(K^2)/d) \ll (\text{Tr}(K)/d)^{-1}$, or in other words, $\text{Tr}(K)/d \ll \text{Tr}(K^2)/d$. This is in contrast to the isotropic case where $\text{Tr}(K)/d = \text{Tr}(K^2)/d$. *We can make this more clear by writing $\text{Tr}(K)/d \ll \text{Tr}(K^2)/d$ in parentheses after the phrase “strong anisotropy.”*
* **Polyak stepsize.** Indeed, we are using the derivation of the Polyak stepsize (see, e.g., [1]) as motivation to construct what should be the “idealized” stochastic version, and not $(R(X_k)-R(X^*)) / \|\nabla R(X_k)\|^2$. We will add commentary about the derivation of the Polyak step size in the paper and how it deviates from the classical Polyak stepsize. We appreciate the reviewer pointing this out.
\[1\] Elad Hazan and Sham Kakade. “Revisiting the Polyak Step Size”.
* **Concentration as an inverse power of $d$.** The concentration for the distance from SGD statistics to their deterministic equivalent is indeed $d^{-\varepsilon}$ for $0<\varepsilon<1/2$, so the best bound we obtain for this concentration is slightly worse than $1/\sqrt{d}$. The comment in line 48 is that the concentration occurs with **probability** better than any inverse power of $d$. In other words, the probability that our statistic will be less than $d^{-\varepsilon}$ is at least $1-d^{-C}$ for arbitrarily large $C$. In the theorem, we describe this more succinctly with the phrase “with overwhelming probability.” However, the definition of overwhelming probability is not introduced until line 74, so we do not use it in the comment on line 48.
* **Stopping times.** This is indeed unclear, and thank you for pointing it out. It is meant to be the minimum of the two stopping times, as in the preceding equation and in the Proposition that is cited from \[14\]. *We can clarify this in the updated version of the paper.*
* **Verifying $S(t,z)$ is an approximate solution.** Indeed, this is a very important condition to verify, and doing so is the primary purpose of Section C.2. The way in which this is utilized with regard to the mesh is briefly described in Section C.1.3 where we prove Proposition C.1 (pending the details that are in Section C.2). *Based on your question, we can add a sentence at the location that you referenced summarizing the flow of the proof and where these details can be found.*
---
Rebuttal 2:
Comment: Acknowledged; see my full comment below. I believe you are missing the last three questions (one might be a typo, so I am pinging you about this).
---
Rebuttal Comment 2.1:
Comment: Oops! Yes, we forgot to add the responses to the last questions. Sorry!
* **Comparison of Line search and Polyak risk convergence rate.** As you have observed, Table 1 indicates that the convergence rates for the risk in line search and Polyak have the same formula as a function of the limiting learning rate, $\gamma_{\infty}$. The key difference is in the value of $\gamma_{\infty}$ in these two cases. For data that is strongly anisotropic in the sense described above, the value of $\gamma_{\infty}$ can be much smaller for line search than for Polyak, thus yielding slower convergence.
* **Number of runs for simulations.** The number of runs for the simulations, as well as other implementation details, are provided in Appendix H. Due to space constraints, we could not include these details in the main body of the paper.
* **Scaling of initialization in line 1125.** This is indeed a typo, and thank you for pointing it out. *We corrected it.*
* **What happens when $\beta+\delta>2$.** This is a natural question, and thank you for asking. We do not address the case $\beta+\delta>2$ because it represents a setting in which the high-dimensional problem devolves into a finite-dimensional one where our scalings no longer really make sense. This is perhaps easiest to see in our formulation of the power law set-up for the covariance, where we take the spectrum of $K$ to approach a continuous density function, supported on $(0,1)$, with unbounded density near 0. This density is only well-defined for $0<\beta<1$. Similarly, one can view the power-law set-up for $\mathscr{D}^2_i(0)$ as giving the distribution of the projections of $(X_0-X^{\star})^2$ in the eigenvector directions. This distribution approaches a well-defined, continuous density function when $0<\delta<1$. Thus, the limits that we consider cease to make sense when $\beta+\delta>2$.
One can also see that the proofs break down when $\beta+\delta>2$. Our analysis of AdaGrad-Norm in the power-law setting (see Section D.2.3) relies upon the fact that the deterministic limit of the risk can be expressed as a convolution Volterra equation (see (67)). Under the assumption that the forcing function $F$ has power law decay (see Assumption 8), we can derive the asymptotics of the learning rate and the risk for various regimes, as displayed in Table 1. Assumption 8 is satisfied by our power-law set-up, provided that $\beta+\delta<2$ (see Lemma D.5).
---
Rebuttal 3:
Title: General Response
Comment: Dear author(s),
thank you for your general rebuttal and for the rebuttal concerning my specific comments. I understand all your points, and after the corrections/specifications you put, I will raise my score. Some quick points:
- strong anisotropy and verifying the approximate solution: your comments in italics are necessary in my opinion.
- Polyak comment: yes, please.
- scalings comment: see one of the questions missing (at the very end) in my long list to close this aspect.
- all other typos: thank you for fixing them.
last comment: This is a very deep paper, maybe a journal deserves it.
My score is higher. I hope the other reviewers will raise their very low grades. Some scores, in their very description, are too harsh with respect to the actual review.
My score reflects my opinion on the exact description provided by Neurips: moderate-to-high impact. I stand with the idea that this paper is definitely not a 3.
Good luck!!
---
Rebuttal Comment 3.1:
Title: Regarding gradient flow, different scalings
Comment: Dear reviewer,
We want to address the question you re-raised, which we didn't completely answer:
> I am concerned about the scaling. My understanding of the literature is that scalings are very important. Your assumptions require that is bounded independently of the dimension and that is bounded, independently of the dimension. Moreover, your step-size is ...
> What is done in some other references is very different. Let me take an example, and correct me if I am wrong.
Let's take the example you posed. The steps of SGD look like
$$ X_{k+1} = X_k - \eta a_{k+1} f'\left( \langle a_{k+1}, X_k \rangle \right) $$
Now you take $a_{k+1}$ scaled to have norm $O(1)$ and $X_{0}$ to have norm $O(\sqrt{d})$. This is not our setup, but it is possible to change variables to put it in our setup.
Let $Y_{k} = X_{k}/\sqrt{d}$ and let $b = a \sqrt{d}$. Now in these variables, the equation above becomes
$$ Y_{k+1} = Y_k - \frac{\eta}{d} b_{k+1} f'\left( \langle b_{k+1}, Y_k \rangle \right) $$
So indeed, in the setup that you proposed, it would be correct to take $\eta = O(1)$, and it can be equivalently represented in our setup with stepsize scaling like $O(1/d)$. What's important in both cases is that the scale of the inner product $\langle b_{k+1}, Y_k \rangle$ is $O(1)$.
So our setup is not gradient flow, and the (non-degenerate) high-dimensional limit for the scaling you propose is the same as ours.
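For concreteness, this change of variables is easy to check numerically. Below is a minimal sketch (our illustrative choice $f(r) = r^2/2$, so $f'(r) = r$, and all parameter values are ours, not from the paper): the two parameterizations produce identical trajectories after rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta, n_steps = 50, 0.5, 100
fprime = lambda r: r                        # illustrative: f(r) = r^2 / 2

a = rng.standard_normal((n_steps, d)) / np.sqrt(d)   # ||a_k|| = O(1)
X0 = rng.standard_normal(d)                          # ||X_0|| = O(sqrt(d))

# your parameterization: stepsize eta = O(1)
Xk = X0.copy()
for k in range(n_steps):
    Xk = Xk - eta * a[k] * fprime(a[k] @ Xk)

# our parameterization: Y = X / sqrt(d), b = a * sqrt(d), stepsize eta / d
Yk = X0 / np.sqrt(d)
b = a * np.sqrt(d)
for k in range(n_steps):
    Yk = Yk - (eta / d) * b[k] * fprime(b[k] @ Yk)

assert np.allclose(Yk, Xk / np.sqrt(d))     # identical trajectories after rescaling
```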
By the way, we'd be happy to look at any literature involving the scalings that you propose, so that we can make a better comparison with existing literature.
Please let us know if this answers all your questions regarding this setup.
Thanks!! | Summary: Analyzes the behavior of AdaGrad-Norm, Line-search and Polyak step size using ODEs.
Strengths: This paper provides an interesting characterization of the behavior of several optimization methods using ODE tools, which are not as heavily used in optimization theory as they should be. This work seems technically sound, although I didn't review the (long) Appendix.
I am in favor of acceptance, although with reservations. The methods studied are interesting to theoreticians, but they are rarely if ever used in practice, which limits the practical appeal of this work. Especially the idealized special cases for the line search and Polyak step size. The focus on quadratic problems is also a major limitation, especially since the analyzed methods are not typically used on quadratic problems.
There is definitely potential for followup work to build on this work.
Weaknesses: - Paper spends a lot of time setting up a general framework, which it then applies to specific settings. The amount of notation introduced seems over the top for a conference paper. For instance, the results for least-squares can be presented without establishing the full general framework, which could be left to the Appendix. Sections 1.2 and 2 are just... a lot to get through.
- Paper ends very abruptly.
- Some plot fonts are too small to read when printed.
- Further discussion of the implications and potential applications would be good.
Technical Quality: 4
Clarity: 2
Questions for Authors: See above.
Confidence: 2
Soundness: 4
Presentation: 2
Contribution: 2
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their support of our paper. We answer some of the reviewer’s reservations below.
**“The methods studied are interesting to theoreticians, but they are rarely if ever used in practice, which limits the practical appeal of this work. Especially the idealized special cases for the line search and Polyak step size. “**
We agree with the reviewer that the algorithms studied are idealized and/or not widely used in practice. However, traditional analyses of adaptive stochastic learning rate algorithms (e.g., idealized line search, Polyak, etc.) all give the same rates as those used in practice. The traditional theory does not distinguish between "good" and "bad" practical algorithms. This suggests we need a new type of analysis, and part of our goal was to provide such a framework. While we only analyze "idealized algorithms", we now understand problems for which these idealized algorithms do not perform well. This hopefully illuminates, and begins closing, the gap between practice and theory.
In future work, we want to derive the dynamics of more practical algorithms. We strongly believe this framework might explain theoretically why certain algorithms often used in practice perform better.
**Focus on quadratic problems is a major limitation, especially since the analyzed methods are not typically used on quadratic problems.**
We agree with the reviewer that “quadratic problems” are idealized, and we want in future work to extend the analysis of these algorithms to more complex losses. On the other hand, many of the algorithms were designed by approximating the “complex” loss with a quadratic. Moreover, it is surprising that we do not even know how these algorithms perform on these quadratics. In our opinion, we should understand this simple loss in detail first.
This said, our analysis extends beyond quadratics to include generalized linear models such as logistic regression and more. Moreover, we do provide some analysis for other losses beyond quadratics: for strongly convex problems (not necessarily quadratic) we find a bound on the limiting learning rate, see Section D.1. Also, Figure 1 shows our predictions matching the iterates of SGD with AdaGrad-Norm for binary logistic regression.
**Response to Weaknesses**
* **Too much set-up in the main paper.** Thank you for your suggestions. In the updated version of the paper, we plan to lighten the notation and move the least square results upfront; see our reply to all reviewers.
* **Plot fonts too small.** Thank you for pointing that out; in the next version of the paper we will update the figures and make the fonts larger. See the attached PDF with the revised figures.
* **Paper ends abruptly.** Added a “Conclusion” paragraph. See comment to “All Reviewers” for the specific paragraph.
* **Discussion of implications and potential applications.** Added a “Limitations” paragraph. The conclusion paragraph contains potential applications. See comment to “All Reviewers” for the specific paragraph. | Summary: ### Update after rebuttal
After reading the feedback carefully, I updated my score for the paper.
### Original review
In this work, the authors study SGD and its adaptive variants in the setting of noisy generalized linear models. Assuming the covariance matrix of the data is bounded by a dimension-independent constant and a few more similar assumptions, the authors establish that the expected risk of SGD and the expectation of adaptive learning rates both converge to deterministic curves described by an ODE.
I found the paper rather hard to follow and I couldn't understand the significance of the obtained results. The theory requires a lot of simplifications: the prediction model is linear; the learning rates that are studied are "idealized", so they are not the same as those used in practice; Adagrad-norm isn't really a method popular in practice; the ODE solution is not trivial in the case of least squares, as equation (11) is implicit, so it requires further assumptions to make a statement about the solution. I did not understand what insight we can draw from all of this.
Strengths: 1. The theory reveals that line search leads to slower convergence than Polyak stepsize.
2. In some simplified cases, there exist exact expressions that describe the dynamics of the studied methods.
3. The theory is also supported by numerical simulations.
Weaknesses: 1. The theory for line search and Polyak stepsize is only for the idealized version.
2. The insight that line search methods do not work particularly well is not exactly new, see for instance J Bolte, E Pauwels "Curiosities and counterexamples in smooth convex optimization"
3. The complexity of the theory and the discussion is such that it is very difficult to draw any insight, and it seems the theory wouldn't be extended by others because of how convoluted it is.
### Minor issues
Abbreviation "SGD" is used before (line 23) it is introduced (line 30)
"We fully expect" -> "We expect"
"worst case convergence guarantees" -> "worst-case convergence guarantees"
"strongly-convex" -> "strongly convex"
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. What's the intuition for why $h$ exists and why would it be well-behaved? I can see a discussion in Appendix B, but it doesn't provide any high-level intuition.
2. Why do you say in Appendix B that $R(X)$ is "an expectation of a Gaussian vector"?
3. The authors wrote "We shall remove this condition by approximation in what follows", where is it done?
4. The authors say they need "mild assumptions on the learning rate", but why are they mild?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: The authors could do a better job of discussing the limitations of their work. There is no explicit limitations section, and in the checklist the authors state that they already did everything needed by clearly outlining the required assumptions. However, I think it would benefit the paper if the authors made an explicit statement on how some aspects of the theory are too restrictive.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The reviewer's comments required more space than 6000 characters to answer. *We will include an additional "Official Comment" with the answers to the reviewer's questions.*
**Significance of results \- Main contributions**
This work introduces a framework that goes beyond the traditional analysis of stochastic algorithms by utilizing the high-dimensionality of the problem. It allows for a finer analysis of stochastic learning rate algorithms (e.g., AdaGrad, line search, Adam, RMSprop, etc.).
We also emphasize that one of the main contributions of this work is an explicit deterministic expression for the dynamics of stochastic adaptive learning rate algorithms (Thm 2.1). That is, we can predict the behavior of the loss value and learning rate (both stochastic) at any iteration without ever running the stochastic algorithm (see Fig 1). We then analyze these dynamics to gain insights into the evolution of the stochastic learning rate at any iteration. These dynamics exactly agree when the problem is “high-dimensional” (number of parameters is large) and often numerically reproduce the dynamics on real data (see Fig. 6 on CIFAR-5m dataset).
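As a toy illustration of this kind of concentration (not the paper's theorem: for tractability we use a *fixed* stepsize $\gamma/d$ on isotropic noiseless least squares, where the limiting curve has the closed form $R(t)/R(0) = e^{-(2\gamma-\gamma^2)t}$), one can check that a single high-dimensional SGD run tracks the deterministic curve without ever solving for the stochastic iterates:

```python
import numpy as np

rng = np.random.default_rng(3)
d, gamma, T = 2000, 0.3, 2.0        # dimension, stepsize scale, time horizon

Xstar = rng.standard_normal(d)      # ground truth
X = np.zeros(d)
R0 = 0.5 * np.sum((X - Xstar) ** 2)

# streaming SGD on noiseless isotropic least squares, stepsize gamma/d
for _ in range(int(T * d)):
    a = rng.standard_normal(d)                    # a ~ N(0, I_d)
    X = X - (gamma / d) * a * (a @ (X - Xstar))   # stochastic gradient step

R_end = 0.5 * np.sum((X - Xstar) ** 2)
pred = np.exp(-(2 * gamma - gamma**2) * T)        # deterministic limit of R(T)/R(0)
assert abs(np.log(R_end / R0) - np.log(pred)) < 0.15
```

Increasing $d$ tightens the agreement; fluctuations around the deterministic curve shrink as $d$ grows.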
**Line search and Polyak only “idealized version”**
It is true that the algorithms presented are idealized versions. However, we emphasize that a theoretical understanding of the behavior of these idealized (stochastic) algorithms was not previously known [Note: the reference given is for a line search on a deterministic objective, and less is known about the behavior of stochastic line search]. Adding more practical versions, which are often more complex, only adds complications to the already unknown behaviors. As such, a natural starting point for developing a theory of the exact dynamics of stochastic learning rates is to understand the behavior of these idealized versions.
**Line search methods do not work well is not new**
We first politely disagree with the reviewer that line searches do not work in practice. For deterministic optimization problems, line search methods are widely used and are the default in many well-known algorithms (e.g., see the scipy documentation for BFGS (Armijo condition)). While we appreciate the reference showing an example where line searches fail on a deterministic optimization problem, line searches do provably work for a large class of deterministic optimization problems, including the deterministic versions of the problems analyzed in this paper. The theory for deterministic line searches is well developed, but for stochastic optimization problems the theory is still in its infancy.
We do agree with the reviewer that on stochastic optimization problems, line search methods are not used in practice. In fact, one of the goals of this paper is to explain this! We show why they perform badly due to anisotropic data; in contrast, a standard minimax-optimal analysis says they are as good as SGD (see e.g. [50]). Moreover, we precisely quantify how different the convergence rate of a stochastic line search method is from that of SGD with a tuned fixed learning rate.
The theoretical framework established in this paper (Theorem 2.1) provides the tools (albeit more complex than the “textbook analysis”) to do a finer analysis of stochastic line search methods.
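For the reader's intuition, here is a minimal sketch of what an idealized stochastic line search looks like on a two-eigenvalue covariance (the diagonal $K$ and all parameter values are our illustrative choices, not the paper's experiment): the stepsize greedily minimizes the *true* risk along the stochastic gradient direction, so the risk never increases along the trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)
d, kappa, n_steps = 200, 50.0, 400

# two-eigenvalue covariance: half the directions at 1, half at kappa
evals = np.concatenate([np.ones(d // 2), kappa * np.ones(d // 2)])

Xstar = rng.standard_normal(d)
X = np.zeros(d)

def risk(X):
    delta = X - Xstar
    return 0.5 * delta @ (evals * delta)   # R(X) = (1/2)(X - X*)^T K (X - X*)

risks = [risk(X)]
for _ in range(n_steps):
    a = np.sqrt(evals) * rng.standard_normal(d)      # a ~ N(0, K)
    delta = X - Xstar
    g = a * (a @ delta)                              # stochastic gradient (noiseless)
    denom = g @ (evals * g)
    # idealized exact line search: gamma minimizing the true risk along -g
    gamma = (g @ (evals * delta)) / denom if denom > 0 else 0.0
    X = X - gamma * g
    risks.append(risk(X))

risks = np.array(risks)
assert np.all(np.diff(risks) <= 1e-8)     # greedy steps never increase the true risk
assert risks[-1] < risks[0]
```

Note this sketch only illustrates what "idealized" means; the anisotropy cost relative to tuned SGD is the content of the paper's analysis.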
**Complexity of the theory is difficult**
The standard tools in textbooks for theoretically analyzing stochastic algorithms often give estimates of their practical behavior that are too crude. As an example, the standard analysis says that one needs a decreasing learning rate with SGD, but nobody does this in practice. The point of this paper is to provide another tool that allows for a finer analysis of these algorithms – traditional minimax optimality *isn't capable of distinguishing these algorithms* – so something more complicated *is necessary*.
**Complexity of theory makes it hard to derive insights. Theory will not be extended by others.**
We politely disagree with the reviewer. The related work paragraph (Page 4 Lines 106-110) describes past works which use a high-dimensional framework for analyzing dynamics of stochastic algorithms. These past works usually assume isotropic Gaussian data ($N(0,I)$) and do not consider stochastic learning rates for their algorithm, but are part of a growing body of literature based on this style of analysis.
We know that we are doing something quite different from the vast majority of optimization researchers, but we think there is a need for better ways to distinguish the performance of stochastic algorithms. For example, the more traditional analysis of line search in Vaswani et al. [50] suggests SGD + line-search has no performance cost (their Theorems 1 and 4). We quantify the performance cost: one incurs an extra condition-number factor. This is a statement 99% of optimization researchers should be able to understand, even if the methods are not to their taste, and one *needs* a more complicated setup to see it!
[50] Vaswani, Mishkin, Laradji, Schmidt, Gidel, Lacoste-Julien. Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates. 2019.
**Limitation paragraph.** Please see the response to "All Reviewers" for the exact paragraph we will add to the main document.
---
Rebuttal Comment 1.1:
Title: (Response to questions)
Comment: We thank the reviewer for finding typos, which have now been fixed. We answer below the reviewer's additional questions.
**Intuition and existence of $h$, and why $R(X)$ is “an expectation of a Gaussian vector”**
The intuition for $h$ comes from the fact that $a^TX$ and $a^TX^{\star}$ are correlated Gaussians with covariance given by $[X|X^{\star}]^T K [X|X^{\star}]$. Here $[X|X^{\star}]$ means one concatenates $X$ and $X^{\star}$.
Since $a \sim N(0,K)$, for any fixed vector $X$ we have $y = a^TX \sim N(0, X^TKX)$. Both $y = a^T X$ and $y^{\star} = a^TX^{\star}$ are Gaussian, but they are correlated normals since they both depend on the same $a$. A simple computation shows that the covariance of $(y, y^{\star})$ is $[X|X^{\star}]^T K [X|X^{\star}]$, where $[X|X^{\star}]$ is the concatenated matrix. This is to say that $(y, y^{\star})$ is Gaussian with $(y, y^{\star}) \sim N(0, [X|X^{\star}]^T K [X|X^{\star}])$. (See Section 1.1.)
Suppose $\epsilon = 0$, as otherwise this just adds another integral. From Eq. (1), using the density of the multivariate Gaussian $N(0, G)$ with $G = [X|X^{\star}]^T K [X|X^{\star}]$,
$$
R(X) = \int f(y, y^{\star}, \epsilon) \, \frac{\exp\left(-\tfrac{1}{2}\,(y, y^{\star})\, G^{-1} (y, y^{\star})^T\right)}{2\pi\sqrt{\det G}} \, dy \, dy^{\star},
$$
where $G$ is the $2 \times 2$ matrix $[X|X^{\star}]^T K [X|X^{\star}]$. Integrating out $y$ and $y^{\star}$, the integral is a function of $G = [X|X^{\star}]^T K [X|X^{\star}]$ alone.
This function is $h$. Moreover, this also explains why $R(X)$ is an “expectation of a Gaussian vector”: $(y, y^{\star}) = (a^TX, a^T X^{\star})$ is distributed as a multivariate Gaussian with covariance $[X|X^{\star}]^T K [X|X^{\star}]$, and one takes the expectation over $(y, y^{\star}, \epsilon)$.
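The covariance identity above is easy to verify by Monte Carlo (a standalone sketch with an arbitrary positive-definite $K$; nothing here is specific to the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 5, 200_000

A = rng.standard_normal((d, d))
K = A @ A.T / d                       # an arbitrary positive-definite covariance
X = rng.standard_normal(d)
Xstar = rng.standard_normal(d)

# draw a ~ N(0, K) and form the pair (y, y*) = (a^T X, a^T X*)
L = np.linalg.cholesky(K)
a = rng.standard_normal((n, d)) @ L.T
Y = np.column_stack([a @ X, a @ Xstar])

M = np.column_stack([X, Xstar])       # the d x 2 concatenation [X | X*]
cov_exact = M.T @ K @ M               # claimed 2 x 2 covariance
cov_empirical = Y.T @ Y / n
assert np.allclose(cov_empirical, cov_exact, atol=0.05 * np.abs(cov_exact).max() + 0.05)
```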
**“We shall remove this condition by approximation in what follows”**
The reviewer is absolutely right: we did not end up removing the condition, although there is a straightforward way to do so, and we will correct this. In short, one creates an $f_\epsilon$, an approximation to $f$ formed by convolving with an isotropic Gaussian of variance $\epsilon$. This $f_\epsilon$ is $C^2$ and has bounded second derivatives (as $f$ was smooth). We then take limits as $\epsilon \to 0$.
We note that Lemma B.1 was intended as discussion, and it is not used in the main theorem. Theorem B.1 generalizes to the more standard L-smooth f, at the cost of additional complexities in the formulation (discussed below the Lemma), and is intended to show why we didn’t simply use the more standard L-smooth f assumptions.
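The smoothing idea can be sketched numerically: convolving the nonsmooth $f(x) = |x|$ with a Gaussian gives a smooth $f_\epsilon$ whose uniform distance to $f$ vanishes as $\epsilon \to 0$ (a toy illustration of the mollification, not the exact construction used in the proof):

```python
import numpy as np

def smooth(f, x, eps, n=4001):
    """Gaussian smoothing f_eps(x) = E[f(x + eps * Z)], Z ~ N(0,1), by quadrature."""
    z = np.linspace(-8.0, 8.0, n)
    w = np.exp(-0.5 * z**2)
    w /= w.sum()                        # normalized weights of the truncated density
    return (f(x[:, None] + eps * z) * w).sum(axis=-1)

x = np.linspace(-2.0, 2.0, 101)
f = np.abs                              # nonsmooth at 0; f_eps is smooth
err1 = np.max(np.abs(smooth(f, x, 0.10) - f(x)))
err2 = np.max(np.abs(smooth(f, x, 0.01) - f(x)))
assert err2 < err1 < 0.1                # uniform error shrinks as eps -> 0
```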
**“Mild assumptions on the learning rate” Why mild?**
The learning rate assumptions are mild since they encompass many adaptive learning rates used in practice; they do exclude learning rates which are non-concentrating. Among non-concentrating stepsizes, one can further consider those which bias the gradient and those which do not. Those which bias the gradient require a totally different theory (e.g., scaling the gradient to have norm 1, or gradient clipping). Those which do not bias the gradient would have a theory similar to ours, but with an additional variance term in the ODEs (e.g., RMSprop with a very short window of averaging relative to the dimension). So we chose this case to keep the theory *as simple as possible* (which we know is aligned with the reviewer’s desires!) | Summary: This paper studies Stochastic Gradient Descent (SGD) training with adaptive step sizes. The setting is a generalization of single-index models. There is a ground truth vector $X^*$, and the model must find a vector $X$ such that a loss $L(X) = \mathbb{E}_{a,\epsilon}[f(\langle X,a \rangle, \langle X^*,a \rangle, \epsilon)]$ is minimized. Here $\epsilon$ is a random additive error, and $a$ is distributed according to $N(0,K)$ for a potentially anisotropic covariance $K$.
This paper shows that (1) in the high-dimensional regime, the training dynamics converge to a deterministic limit given by an ODE. It studies these dynamics in two significant cases of adaptive step-size algorithms: (2) exact line search (i.e., greedily decreasing the risk optimally at each step), and (3) AdaGrad-Norm. For exact line search, it is shown that this can be very suboptimal if the data covariance is highly anisotropic. For AdaGrad-Norm, it is shown that if the data covariance has lower-bounded eigenvalues, the optimal step size (within a constant factor) is reached. However, for harder problems with worse conditioning of the data covariance, AdaGrad-Norm can be overly pessimistic.
Strengths: This paper provides a fine-grained analysis of practical optimization algorithms, and also proves a general theorem on convergence to deterministic dynamics that I believe could be used to analyze dynamics for different adaptive-learning-rate algorithms in the future. There is also a strong match between theory and experiments. I believe it is an important step in the direction of understanding these practical optimization algorithms.
Weaknesses: In terms of techniques, Theorem 2.1 on convergence to an ODE is based in large part on modifying a previous analysis of [Collins-Woodfin ’23] to the case of adaptive step sizes, so the techniques are not completely novel.
The data is also assumed to be Gaussian, although anisotropy is allowed.
Technical Quality: 4
Clarity: 4
Questions for Authors: I am satisfied with the presentation in the paper, and do not have questions.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We will update the paper to emphasize the difference between our analysis and that of [14]. The main differences between this work and [14] are:
* First, the learning rate is itself a stochastic process. In addition, since it carries the entire history of the previous gradients, the iterates of SGD are no longer a Markov process. We extend the proof of the previous paper to handle such situations by additionally proving concentration of the learning rate around its deterministic equivalent.
* Second, there are differences in the analysis of the Volterra equation for the least-squares problem. Having the learning rate be a nonlinear function of the loss makes the analysis much harder, and we had to introduce new techniques to study this new generalized form of the convolutional Volterra equation (see Appendix D).
* Third, in this paper we study in depth the effect of different covariance matrix structures. For the line search method, we analyze in detail the example of two distinct eigenvalues. For AdaGrad-Norm, we study different power-law covariance scalings and zero eigenvalues, for which we had to utilize some nontrivial asymptotic methods.
The assumption of Gaussian data could be relaxed. Most of the estimates we use to prove the deterministic equivalent are based on concentration inequalities, which are valid beyond Gaussian data. In addition, Figure 6 in the paper shows that we can predict multipass SGD dynamics on more realistic data (CIFAR-5m), which suggests that our theory extends beyond Gaussian data and the streaming setting.
[14] Elizabeth Collins-Woodfin, Courtney Paquette, Elliot Paquette, and Inbar Seroussi. Hitting the high-dimensional notes: An ODE for SGD learning dynamics on GLMs and multi-index models. arXiv preprint arXiv:2308.08977, 2023.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks, I will keep my score. | Rebuttal 1:
Rebuttal: We thank the referees for their time and constructive comments, which significantly helped us improve our paper. The reviewers' questions were deep, so *we will add additional information as an "Official Comment."*
**Paper structural changes.**
In our next version of the paper, we will implement the following reorganization changes to improve the clarity and flow:
* Condense the assumptions, as much as possible, in the introduction. However, we do value stating the formal assumptions upfront in the main paper, as we do not want to hide anything from the reader. This will include, for instance, moving the discussion of the Assumptions to the Appendix. We will also move the ODEs for the least squares problem earlier in the paper, as recommended by Reviewer A5Ho.
* Add Conclusion paragraph *(see below in "Official Comment").*
* Add Limitation paragraph *(see below in "Official Comment")*.
* Increase the font size in the figures *(see attached PDF)* and added references to the figures in the inline text.
**New developments from previous work [14].**
This work's Theorem 2.1 is inspired by Theorem 1.1 from [14]. We note that Theorem 1.1 in [14] does not apply to adaptive (stochastic) learning rates; it only holds for learning rate schedules which are *deterministic*, such as decaying learning rates $1/t$. The new result (Thm. 2.1) allows for *stochastic adaptive learning rates*. This is important since many widely used practical algorithms are not covered by the results in [14], but are covered by our new Theorem 2.1. This allows us to understand why these algorithms are preferred in practice over, say, deterministic decaying step sizes.
Concretely, Theorem 2.1 (with stochastic adaptive learning rates) requires substantial work and a new proof showing that the stochastic learning rate and the loss simultaneously concentrate around deterministic functions.
We will add a discussion of the differences between Theorem 1.1 in [14] and the new result Theorem 2.1 to the paper.
[14] Elizabeth Collins-Woodfin, Courtney Paquette, Elliot Paquette, and Inbar Seroussi. Hitting the high-dimensional notes: An ODE for SGD learning dynamics on GLMs and multi-index models. arXiv preprint arXiv:2308.08977, 2023.
**Other scalings and lower rank K.**
We have adopted a scaling convention where the stepsize scales like $1/d$. As multiple reviewers noted, this may not be the only scaling in which one sees a nontrivial limit. An important non-asymptotic measurement of $K$ is $\text{Tr}(K) / \|K\|_{op}$, sometimes called the intrinsic dimension $\text{Dim}(K)$. We believe that a version of our main Theorem 2.1 generalizes to a non-asymptotic setting where the intrinsic dimension (as opposed to the ambient dimension $d$) is large, and this affects the stepsize scalings that should be used in SGD.
1. Our theorem is really about the case $\text{Dim}(K) = \Theta(d)$. This is the case, e.g., when one has a dimension-independent condition number (a big part of our paper). Here the $1/d$ scaling is where one sees a nontrivial contribution to the dynamics from both the gradient flow term and the stochastic term (SGD noise). If one were to choose a smaller scaling (e.g., $1/d^2$), the dynamics would devolve to gradient flow, whereas if one were to choose a more aggressive learning rate, the dynamics would devolve to pure stochastic noise (and the risk would generally tend to infinity). With the $1/d$ scaling one sees the effect of both gradient flow and stochastic noise in the dynamics. Put another way, we have chosen the largest learning rate scaling for which one has a nontrivial limit.
2. When $\text{Dim}(K) \to \infty$ but $\text{Dim}(K) = o(d)$, the scaling of $\gamma$ should not be $1/d$; rather, it should be $1/\text{Dim}(K)$ – for quadratic losses, one can check that the same story as above holds here: larger learning rates lead to pure noise and smaller learning rates lead to gradient flow approximations. On the other hand, the ODEs that we derived *are still the same ODEs*; one just needs to rescale time and the learning rate in the ODEs to produce the correct equations. The method in our paper would definitely extend to *some* intermediate growth rates of $\text{Dim}(K)$, but in our mind, the better theorem would be a fully non-asymptotic one – one which shows that when $\text{Dim}(K)$ is large, the ODEs and the SGD curves are close as a function of $\text{Dim}(K)$. That is a much bigger task, and it could clearly be a standalone future project.
3. When $\text{Dim}(K)$ is bounded above (say, under classical source/capacity restrictions), the ODE approximation has to change substantially. The losses do not concentrate, and the mean behavior of the iterates follows a discrete difference equation. One can use tricks, like embedding the Markov chain in continuous time, so that the mean risk follows an ODE. But even doing this, the ODEs have additional terms (which simplify in high dimensions!), and so they are strictly more complicated to analyze. We expect that in some cases this is not hopeless (such as under source/capacity-type conditions), and the extra terms will not play a substantial role.
Both regimes 2 and 3 are interesting, and there is plenty of room for future work in both directions (with both adaptive and non-adaptive stepsizes). We did not directly pursue them in this paper because we needed to show that it was possible to derive meaningful conclusions about the limiting system of ODEs in the high-dimensional case. So we pursued the more modest goal of proving the ODE comparison in the maximal-dimension case.
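A quick numeric illustration of how $\text{Dim}(K) = \text{Tr}(K)/\|K\|_{op}$ separates these regimes, for an assumed power-law spectrum $\lambda_i = i^{-\beta}$ (the spectrum is our illustrative choice):

```python
import numpy as np

def intrinsic_dim(d, beta):
    """Tr(K)/||K||_op for a power-law spectrum lambda_i = i^{-beta}."""
    evals = np.arange(1, d + 1, dtype=float) ** (-beta)
    return evals.sum() / evals.max()

# beta < 1: Dim(K) grows like d^{1-beta} (regime 2 when o(d), regime 1 near beta=0)
assert intrinsic_dim(4000, 0.5) > 1.9 * intrinsic_dim(1000, 0.5)
# beta > 1 (source/capacity-type decay): Dim(K) stays bounded (regime 3)
assert intrinsic_dim(4000, 2.0) < 2.0
```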
Pdf: /pdf/07990b23a4fd86f51a114404d4d178029d3b08f1.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: The paper focuses on an analytical study of the dynamics of a class of optimization algorithms with adaptive learning rates applied to linear models with Gaussian data. The class of algorithms includes AdaGrad-Norm, RMSprop-Norm, Polyak stepsize, and line search, but doesn't include, for example, Adam, classical RMSprop, and AdaGrad. Also, the authors consider $\frac{1}{d}$ scaling of the learning rate with the dimension $d$ of the problem.
First, the authors establish convergence of the optimization trajectory to a deterministic ODE in the limit $d\to\infty$. Then, they focus on analyzing the resulting dynamics for quadratic loss functions and specific algorithms:
- Polyak stepsize: obtaining the learning rate.
- Exact line search: obtaining the learning rate for two cases of data covariance $K$: isotropic, and with two distinct eigenvalues.
- AdaGrad-Norm: $O(t^{-\frac{1}{2}})$ learning rate asymptotics for noisy observations, as well as learning rate and risk asymptotics for isotropic and power-law data covariance $K$.
The theoretical results are validated with numerical experiments on synthetic data.
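The $O(t^{-1/2})$ AdaGrad-Norm decay mentioned above can be reproduced in a few lines (a hedged sketch on a noisy least-squares stream; all parameter choices are illustrative, not from the paper): persistent label noise makes the running gradient-norm sum grow linearly, so the stepsize decays like $t^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(4)
d, eta, sigma = 100, 1.0, 3.0

Xstar = rng.standard_normal(d)
X = Xstar.copy()                 # start at the optimum; label noise keeps gradients alive
grad_sq_sum = 1.0                # b_0^2
gammas = []
for _ in range(4000):
    a = rng.standard_normal(d)
    eps = sigma * rng.standard_normal()
    g = a * (a @ (X - Xstar) + eps)        # noisy stochastic gradient
    grad_sq_sum += g @ g
    gamma = eta / np.sqrt(grad_sq_sum)     # AdaGrad-Norm stepsize
    gammas.append(gamma)
    X = X - gamma * g

# sum of ||g_j||^2 grows linearly, so gamma_t ~ t^{-1/2}
ratio = gammas[999] / gammas[3999]         # roughly sqrt(4000/1000) = 2
assert 1.7 < ratio < 2.3
assert gammas[0] > gammas[-1]
```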
Strengths: - The main result (convergence to deterministic continuous-time ODE) is valid for general loss functions. The precise characterization of the optimization trajectory, as opposed to upper/lower bounds, is typically harder to obtain for non-quadratic loss functions
- For AdaGrad Norm algorithm, the authors characterized the dynamics of risk and learning rate for several distinct types of covariance matrix spectrum.
However, both these points are achieved only partially (see *weaknesses* for details). Completing them would significantly improve the paper.
Weaknesses: The main drawback of the manuscript is its writing. The paper mostly focuses on the introduction and setting, leaving too little space for actual results and their discussion. Out of 9 pages:
- Five pages are reserved for introduction and setup.
- Two pages are occupied by Section **2**, which mainly introduces notations and demonstrates them on the quadratic problem example. Indeed, it contains Theorem **2.1**, but that one is quite short
- Only the last two pages contain the main results of the paper. This leaves very little space for discussing them. Some of the discussions are present in the (quite lengthy) *main contributions* part, but, for readability purposes, it would be nicer to have this discussion together with or after the results.
- Any kind of concluding section is absent
Regarding the figures, only Figure **1** is referenced in the text, while Figures **2**, **3**, and **4** are not. This adds extra effort to interpret them and put them into the right context.
**New developments from previous work**. As mentioned by the authors, the current paper follows the technical framework developed in [14], extending it to adaptive algorithms. Then, it would be good if the manuscript mentioned more clearly which results/notions were already obtained in [14] and which are new additions of the current paper.
- In particular, Theorem **2.1** of the current paper seems to be very close to Theorem 1.1 of [14] which can also handle non-constant learning rates. If two theorems are indeed close, it would be interesting to discuss new aspects introduced by adaptive algorithms.
- Assumptions **1-6**, and the notations of sections **1.1**, **1.2**, **2** look very close to that of [14]. If that is the case, it would be interesting to discuss more specifically their differences. Also in that case, sections **1.1**, **1.2**, **2** could be compressed by partially moving them in appendix, leaving more space for the results and their discussion.
**Results for non-quadratic losses**. Although the framework, as introduced in section **2**, is designed for general non-quadratic problems, specific results are given only for quadratic problems in the main paper. Some results for non-quadratic problems are given in appendix sections **D.1**, **E**, **F.1**, **G.1** (mainly based on results of [14]). In their current form, it is hard to extract interesting conclusions from these results or to determine the contributions compared to the previous literature.
Technical Quality: 3
Clarity: 1
Questions for Authors: **High dimensional vs. low dimensional problems**. Some covariance matrices $K$ satisfy assumption **1** but have $\frac{\mathrm{Tr} K^2}{d}\to0$ and therefore produce trivial ODE dynamics coinciding with full-batch Gradient Flow (as can be seen from eq. 11), for example
- rank 1 with non-trivial eigenvalue $\lambda=1$ (extreme case).
- with power-law spectrum $\lambda_i=i^{-\alpha}, \alpha>1$ (e.g. classical *capacity* and *source* conditions).
Indeed, for these two examples, the dynamics converge to full-batch Gradient Flow due to $\propto \frac{1}{d}$ scaling of learning rate in eq. 3, while learning rate independent of $d$ would be more natural.
- Could the techniques developed in this paper handle such problems?
- If no, is it possible to formulate a criterion on covariance matrix $K$ to distinguish non-trivial ODE dynamics from trivial ones (i.e. coinciding with full-batch gradient flow)? For general non-quadratic problems.
**Results for power-law spectrum**. The last section provides a variety of results for a power-law distribution of the eigenvalues (with exponent $\beta$) and of the target (with exponent $\delta$). In the non-vanishing learning rate phase $\beta+\delta<1$, how does the obtained rate $\mathcal{R}(t)\sim t^{\beta+\delta-2}$ compare with other results for SGD under a power-law spectrum (e.g. [1](https://arxiv.org/abs/2006.08212),[2](https://arxiv.org/abs/2102.03183),[3](https://arxiv.org/abs/2206.11124))?
**Learning rate phase transition**. For a power-law spectrum, the learning-rate asymptotics exhibit a transition at $\beta+\delta=1$, as demonstrated in fig. **4**. Interestingly, in this figure we can see that in both phases the learning rate decreases similarly until it reaches a very small value ($\gamma\sim 10^{-4}$) at quite late optimization times ($t\sim10^{3}$). Can this dynamic transition (w.r.t. time $t$) be explained theoretically, for example, via some quantity characterizing how the transition scale grows with $d$?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The paper surely doesn't have any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address below the questions of the reviewer.
1. **Isotropic covariance matrix $K$.** We emphasize that our results hold for *non-isotropic data covariance matrices $K$* (under the mild assumptions that $\|K\|_{op} < C$ and that the average eigenvalue of $K$ is bounded independently of $d$). We point to Reviewer ipsV, who also noted the anisotropic assumption on $K$.
2. **Writing of the manuscript.** Please see some specific changes in the “All Reviewers” section of the response. We appreciate the reviewer’s suggestions.
3. **New developments from previous work \[14\].** See discussion to all reviewers.
4. **Beyond non-quadratic losses.** In Appendix D, we show that under a restricted secant inequality, conclusions similar to the quadratic case can be derived for AdaGrad-Norm. The results we obtain there are essentially the same as for quadratics, so we were not keen to generalize the whole paper to losses satisfying an RSI if, at the end of the day, we just get the same answers as for quadratics. Identifying interesting and important non-quadratics with different phenomenology for adaptive algorithms is an interesting direction of future research (e.g. nonconvex problems like retrieval, or problems exhibiting implicit bias). Using Theorem 2.1, the ODEs are already in place, and so there is a framework for finer analysis of adaptive algorithms which was missing from the literature. We believe these methods can eventually be used to provide better or more informed adaptive stepsize methods.
5. **High-dimensional vs low-dimensional problem extensions.** Thanks for these points – they are very good. Please see the discussion in the rebuttal to all reviewers. We'll remark that the $\text{Tr}(K^2) = o(d)$ condition you observed is implied by $\text{Tr}(K) = o(d)$, so long as we keep a bounded operator norm of $K$, and so it falls into the regime where SGD and the ODEs both need to be scaled differently. We can add a discussion of this to the paper.
Part of the reason we wrote *this* paper is that we needed to demonstrate that it was actually possible to analyze these ODEs and derive meaningful conclusions about optimization algorithms. In particular, we needed to analyze these ODEs in the presence of anisotropy, which can heavily degrade the value of an adaptive stepsize algorithm (e.g. line searches work great in the isotropic case, and are essentially optimal, but they are bad in the presence of anisotropy; in contrast, AdaGrad-Norm does much better, although it still fails in some power-law-type setups). So half of this paper is analyzing various algorithms and half is formulating a precise theorem which says where the ODE comes from.
We hope this paper could be a starting point for us (or, gladly, other researchers\!) to refine the method further, for example:
* Finding the optimal covariance conditions under which these ODEs hold and making a fully non-asymptotic comparison.
* Understanding the extra correction terms to the ODEs in non-high-dimensional setups, such as the classical source/capacity conditions. (The rank-1 problem you mentioned, in contrast, is not well-aligned with the goals of this type of analysis, which we think could be summarized as: how to give meaningful quantification of adaptive algorithms on anisotropic problems).
* Digging deep into other adaptive stepsize strategies to see if they perform better in anisotropic setups.
* Showing a version of Theorem 2.1 for non-Gaussian data (high-dimensionality also gives a path to showing universality of the ODEs).
It could be, for example, that Adagrad norm on classical capacity source setups could be analyzed in the way we did and have interesting conclusions.
6. **SGD power-law spectrum rate.** We thank the reviewer for the references. Under our set-up, constant stepsize SGD would be $\mathcal{R}(t) \asymp t^{\beta+\delta-2}$, and there is no transition at $\beta+\delta=1$. We will add this to our paper. The references [1],[2],[3] suggested by the reviewer do consider similar assumptions on the spectrum and eigenvalues of the covariance matrix, but the papers referenced are not in the high-dimensional regime (diverging $\text{Tr}(K)$). All of [1],[2],[3] contain results which formally correspond to $\beta > 1$, so a quantitative comparison is not really possible ([3] discusses our regime here, but does not seem to have rates in this regime). Qualitatively, they all demonstrate results similar to those seen here (formally similar to how the loss undergoes a phase transition as the 'source' parameter varies in source/capacity setups). In the next version of the paper, we will add clarification regarding the scaling with constant stepsize, together with a comparison and citations of the relevant works.
7. **Learning rate phase transition.** This is a great question and worthy of future research. While we did not keep track of the exact finite dimensional $d$ effects, one could in principle use our analysis to theoretically quantify when the asymptotics for the learning rate and the risk take over.
---
Rebuttal Comment 1.1:
Title: Rebuttal update
Comment: Dear authors, thank you for your response, including the general rebuttal. Based on the rebuttal, I am willing to increase the score to 5. Let me clarify the reasoning behind it, so that the authors and AC can calibrate the score. The positive points are
- proposed changes in writing are good
- the discussion of relation with [14] adds clarity and transparency
- the comments regarding different scaling of learning rate with dimension $d$, as well as additional discussion of power-law rates, help to better identify the positioning and relation of the current paper with other approaches.
What prevents me from raising the score further (e.g. to 7) is that the paper still explores only a small slice of the behaviors of adaptive algorithms. Theorem 2.1 is a solid foundation for studying adaptive algorithms, but it only really pays off through subsequent application to different adaptive optimization scenarios. The applications in the current manuscript (sections 3 and 4) still feel somewhat brief and not fully explored. The same goes for the application to non-quadratic problems: the authors comment that the results are roughly the same as for quadratic problems, which does not sound satisfying.
Disentangled Style Domain for Implicit $z$-Watermark Towards Copyright Protection | Accept (poster) | Summary: Current watermarks applied to AI-generated images rely on adding additional information, limiting their ability to detect unauthorized use of data. This paper introduces a new implicit watermarking scheme, which is the first to utilize the disentangled style domain to detect unauthorized dataset usage in text-to-image models, achieving self-generalization and mutual exclusivity within the style domain anchored by protected units. In addition, the paper introduces the concept of watermarking distribution and establishes a verification mechanism for copyright ownership of hybrid or partial infringements. It is worth noting that the paper implements One-Sample-Verification for dataset copyright verification in AI mimic generation. The paper also designs experiments to verify robustness and generalization across multiple datasets, along with ablation studies.
Strengths: - The problem exploited in this paper is important and needs to be addressed.
- The idea of disentangling the image into content and style seems effective.
Weaknesses: - This paper is really hard to follow. The method's modules are not explained very well, and the input, output and training details of each module are not clear.
- Too much mathematical description makes it difficult for readers to quickly understand the process and principle of the method, and the meaning of each mathematical symbol is not clearly described.
- The model structure diagram is less relevant to subsequent descriptions. The paper also lacks simple examples of the data used in the experiment.
- Only the watermark distribution concept is proposed; copyright ownership verification experiments for hybrid or partial infringements are lacking.
Technical Quality: 2
Clarity: 2
Questions for Authors: - The significant problem is the writing, too obscure.
- There is a problem with the legend markers in Figure 2: the marked color of the central sample is inconsistent with that in the figure.
- There is an unclear sign in line 181. Should $s_k^+$ be $s_i^+$?
- Can you explain the meaning of each symbol of formula 7 in detail, and supplement the rationality and significance of the indicators proposed?
- In 5.2 Main result, can you give the value of the number of protected units (i.e., K) and how the 1000 images used were selected?
- It is observed that the avg acc is higher with the longer the watermark length in the ablation experiment. Can you give the details of the mapping from the contraction domain to the watermark in the extractor?
- The experiment is only compared with the digital watermarking method, can you add a comparison with other methods (such as backdoor-based)?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors addressed the limitations of this paper, which focuses only on the disentangled style domains of protected units, potentially making it difficult to resist attacks that modify deep features of style domains. Future research could incorporate specific adversarial samples.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer NU2V, thank you very much for your careful review of our paper and thoughtful comments. We hope the following responses can alleviate your concerns.
---
**Q1: The significant problem is the writing; this paper is tough to follow and lacks examples.**
**R1:** Thank you for your constructive suggestions! We will reorganize and improve it to make the expression clearer and more understandable. **Regarding training details**, we have provided **model details and experiment details** in the supplementary materials. Meanwhile, **in Section 3.2 of the supplementary materials**, we show simple examples of the suspicious data generated from suspicious models and APIs, such as DALL·E·3.
---
**Q2: Can you explain the meaning of each symbol of Formula 7 in detail, and supplement the rationality and significance of the indicators proposed?**
**R2:** Formula 7 is as follows:
$P_{z}(x|\phi \backsimeq \mathcal{D}) = \frac{q_{\phi_z}(z_{emb}|z)}{2^L \cdot K \cdot (c+\beta)^{K \times N^2 \times (K-1)}}$
- **On the left side of the equation:** $x$ denotes the suspicious sample, $\phi$ represents the parameters of the VAE, $\mathcal{D}$ denotes the protected dataset, and $z$ signifies the identifier. $P(\cdot)$ represents the probability distribution of the copyright of $x$ belonging to $\mathcal{D}$.
- **On the right side of the equation:** $z_{emb}$ denotes the embedding representation, and $q_{\phi_z}(z_{emb}|z)$ denotes the prior probability distribution. $L$ denotes the length of the watermark, $K$ denotes the number of protected units, $N$ denotes the number of generation of protected unit, $c$ denotes the marginal distance, and $\beta$ denotes a positive hyper-parameter.
Here, $1/2^L$ denotes the probability of the watermark matching all $L$ bits by chance, $1/K$ denotes the probability that the sample to be detected belongs to one of the $K$ datasets' classes, and $1/(c+\beta)^{K\times N^2\times(K-1)}$ denotes the reciprocal of the distance between samples with different styles and contents. Their product represents the probability that the sample to be detected originates from the protected dataset. In hypothesis testing, a low-probability event is almost certain not to occur in a single random trial, and the probability of such an event is used as the significance level $\alpha$ (i.e., $\alpha \leq P(\cdot)$). Therefore, in the process of copyright ownership detection, the event that the sample is detected as belonging to the protected dataset can be expressed as $H_0: \mathcal{D} \leftarrow x$, with a confidence interval of $1 - \alpha$, i.e., $P(|X - \mathcal{D}| \leq c) = 1 - \alpha$. Thus, we have very high confidence in ensuring ownership.
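For intuition, the false-match probability implied by Formula 7 can be sketched numerically. The snippet below is not from the paper: it works in log space to avoid underflow, drops the prior term $q_{\phi_z}(z_{emb}|z) \leq 1$ (so it gives an upper bound on the log-probability), and all parameter values in the comment are hypothetical.

```python
import math

def log_false_match_prob(L, K, N, c, beta):
    """Natural log of 1 / (2^L * K * (c + beta)^(K * N^2 * (K-1))).

    Requires c + beta > 1 so that the distance term shrinks the
    probability; the omitted prior q(z_emb | z) <= 1 only makes
    the true probability smaller.
    """
    return -(L * math.log(2.0)
             + math.log(K)
             + K * N ** 2 * (K - 1) * math.log(c + beta))

# e.g. with hypothetical L=128 bits, K=50 units, N=2, c=0.1, beta=1.0,
# the log-probability is on the order of -1000: effectively impossible
# to match by chance, which is what underpins the significance level.
```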
---
**Q3: Can you give the value of the number of protected units (i.e., K) and how the 1000 images used were selected?**
**R3:** Thank you for your comments and we do understand your concerns.
1. In 5.2 Main result, we set the value of $K$ to 50.
2. In the process of selecting 1000 images of the protected unit, we first obtained the representation $z$ of each image through the style domain encoder. We randomly selected one as the $z_o$ anchor sample, and the others as $z_{go}$. Then, we ranked them based on their similarity and Euclidean distance and finally selected the images according to the ranking results.
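The selection procedure in R3 can be sketched as follows. The rebuttal does not state exactly how the similarity and Euclidean-distance rankings are combined, so averaging the two ranks below is an assumption, and the function names are illustrative only:

```python
import numpy as np

def select_protected_unit(embeddings, n_select, anchor_idx=0):
    """Pick n_select images closest to the anchor style embedding z_o.

    Each row of `embeddings` is a style-domain representation z; the
    anchor z_o plays the role of the randomly chosen sample, and the
    rest are ranked by cosine similarity and Euclidean distance to it.
    """
    z = np.asarray(embeddings, dtype=float)
    z_o = z[anchor_idx]
    cos = z @ z_o / (np.linalg.norm(z, axis=1) * np.linalg.norm(z_o) + 1e-12)
    dist = np.linalg.norm(z - z_o, axis=1)
    # combined rank: high similarity rank + low distance rank
    rank = np.argsort(np.argsort(-cos)) + np.argsort(np.argsort(dist))
    return np.argsort(rank)[:n_select]
```

The anchor itself always ranks first (cosine 1, distance 0), matching the role of $z_o$ in the description above.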
**Q4: Can you give the details of the mapping from the contraction domain to the watermark in the extractor?**
**R4:** Thank you for your comments. The PyTorch code of the watermark extractor is as follows:
```python
import torch
import torch.nn as nn

class w_decoder(nn.Module):
    def __init__(self, inc, outc):
        super(w_decoder, self).__init__()
        # two strided convolutions compress the (B, 4*inc, H, W) style-domain map
        self.Conv_ = nn.Sequential(
            nn.Conv2d(4*inc, 4*inc, 3, 2, 1),
            nn.BatchNorm2d(4*inc),
            nn.GELU(),
            nn.Conv2d(4*inc, 4*inc, 3, 2, 1),
            nn.BatchNorm2d(4*inc),
            nn.GELU(),)
        # fuses the reduced domain features with the identifier z (6*inc dims)
        self.F_fusion = nn.Sequential(
            nn.BatchNorm1d(6*inc),
            nn.GELU(),
            nn.Linear(6*inc, inc),
            nn.BatchNorm1d(inc),
            nn.GELU())
        self.F_reduce = nn.Sequential(
            nn.Linear(4*inc, 2*inc),
            nn.BatchNorm1d(2*inc))
        # maps the fused representation to outc watermark logits
        self.out = nn.Sequential(
            nn.BatchNorm1d(inc),
            nn.GELU(),
            nn.Linear(inc, outc))
        self.fc_d = nn.Linear(inc, inc)
        self.fc_f = nn.Linear(inc, inc)
        self.Adapt = nn.AdaptiveAvgPool2d(1)

    def forward(self, data, domain, z):
        f = self.Conv_(domain)
        f = self.Adapt(f).view(f.shape[0], f.shape[1])       # (B, 4*inc)
        f_reduce = torch.cat((self.F_reduce(f), f), dim=-1)  # (B, 6*inc)
        f_fusion = self.F_fusion(torch.add(f_reduce, z))     # (B, inc)
        out = self.fc_d(data) + self.fc_f(f_fusion) + data + f_fusion
        out = self.out(out)                                  # (B, outc) watermark logits
        return out

# usage: wm_logits = Z_Model.w_decoder(data, domain, z)
```
---
**Q5: Can you add a comparison with other methods (such as backdoor-based)?**
**R5:** To further alleviate your concerns, we compare ours and methods based on backdoor attacks.
- We employ the current SOTA (DIAGNOSIS[1]) for dataset protection through backdoor. The evaluation metrics utilized are True Positive (TP), True Negative (TN), and Attack Success Rate (ASR), as implemented by DIAGNOSIS.
- **Main result:** The experimental results are shown in the table below.
|Method|TP|TN|ASR(%)|Avg acc(%)|k@t@100%wd(%)|
|-----------|-----|-----|---------|-------------|-------------------|
|DIAGNOSIS|993|7|99.3|-|-|
|Ours|1000|0|100|99.72|98|
- **Post-tracking ownership:** It refers to the process of claiming copyright ownership before litigation when owners discover suspicious models or images without having embedded backdoors immediately.
|Method|TP|TN|ASR(%)|Avg acc(%)|k@t@100%wd(%)|
|-----------|-----|-----|---------|-------------|-------------------|
|DIAGNOSIS|2|998|0.2|-|-|
|Ours|1000|0|100|99.69|94.7|
[1] DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models. ICLR 2024.
---
Rebuttal Comment 1.1:
Title: Comment
Comment: Thank you for your response. These details are important so the reader can understand your method clearly, and I will raise my score. However, after reading other reviewers' comments, I share the same concern: the efficiency of the proposed framework should also be analyzed.
---
Rebuttal 2:
Comment: Dear Reviewer **NU2V**, thank you once again for your valuable feedback on our paper and for helping us improve our work. Your decision to raise our score is a recognition of our efforts. Regarding your concerns about the efficiency of the proposed framework, we have addressed this in our response to Reviewer d18z by analyzing the pipeline of our method across _the registration, computation, and inference stages_, supported by detailed experimental data. We will further clarify the efficiency of this framework in the revised paper. Thank you again for your thorough review and for helping us improve our work! | Summary: The paper introduces an implicit watermarking scheme that leverages disentangled style domains to detect unauthorized dataset usage in text-to-image models. The proposed method aims to address limitations in traditional watermarking techniques by using implicit z-watermarks for dataset copyright verification, achieving better robustness against various attacks.
Strengths: The proposed framework is able to protect the dataset copyright. This is extremely important in an era of generative AI. Besides, rather than directly protecting the image, this work proposes to protect the styles. Such a thinking may bring some new insights into this area.
Weaknesses: 1. The writing of this paper has some problems. The description in its first section is chaotic.
2. In some specific situations, I agree that the styles are necessary to be protected. However, in most situations, copyright law typically protects original works of authorship. An individual style, which may consist of a particular technique or aesthetic, is not considered a tangible, original creation. Styles are more akin to ideas or methods, which are generally not protected under copyright law. If the styles are not protected by the laws, why do we need such a method mentioned in this submission?
3. Defining and enforcing copyrights for individual styles would be highly subjective and impractical. Styles evolve and are influenced by many sources, making it difficult to establish clear boundaries for what constitutes a protected style.
4. Granting copyright protection to individual styles could stifle creativity and innovation. Artists and creators often build upon existing styles and techniques to develop new works. Restricting the use of styles could hinder the creative process and limit artistic freedom. For example, if the Cubist style were copyrighted, it could prevent new artists from experimenting with and developing this style further, thereby limiting artistic progress.
Technical Quality: 2
Clarity: 2
Questions for Authors: I have shown my concerns in the weakness part, Please address my concerns there.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: I am uncertain about how practical it is to implement this method on a large scale. The experiments provided in this paper only cover limited data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer **d18z**, thank you very much for your careful review of our paper and thoughtful comments.
**Q1: Response on "Why We Need to Protect Individual Creators' Styles and Content Copyrights in the AI Era".**
**R1:** To further alleviate your concerns, we provide more explanations.
- First, **considering both style and content align better with judicial standards**. In copyright cases, judges assess ownership by comparing the style, brushstrokes, and content of the original and imitation works.
- Second, **there have been numerous high-profile legal cases involving unauthorized imitation of artists' styles by AGI.** Notably, Sarah Andersen [1] and Getty Images [2] have filed lawsuits against Stability AI, DeviantArt, Midjourney, and OpenAI over copyright and trademark infringement. The AI-generated 'Rock, Paper, Scissors' [3] violated multiple laws, including the Digital Millennium Copyright Act. Greg Rutkowski's art style has been used by AI without authorization for profit over 3 million times [4]. _In the AIGC era, safeguarding and tracing the styles and content of personal works is crucial_.
- Third, **there are already some efforts dedicated to protecting the styles of artistic works,** such as **Glaze: _Protecting Artists from Style Mimicry by Text-to-Image Models_[5] (_Best Paper at USENIX Security 2023_).** Glaze's survey shows that 91% of 1,207 artists are concerned about AI using their works for training. **Artists expect AI mimicry to have a significant impact on the art community:** **97% of artists believe it will reduce job security for some artists; 88% think it will discourage new students from studying art; and 70% feel it will diminish creativity.** Many artists (**over 89%**) have already taken or plan to take action in response to AI mimicry. Additionally, **55%** think reducing their online presence will impact their careers, while **78%** of artists expect AI mimicry to affect job security, rising to **94%** for newer artists. The survey report indicates that without appropriate regulations, AI-style mimicry could undermine people’s motivation for creative freedom.
- Fourth, **we aim to establish a positive and healthy cycle for the development of art between generative AI and human creation,** where human creators should retain the rights to authorize and trace their creative styles and content. We hope our work will guide the regulated development of generative AI. _In the AI era, we believe this step is urgent for ensuring intellectual property rights for human creativity._
**Q2: Defining and enforcing copyrights for individual styles would be highly subjective and impractical. Styles evolve and are influenced by many sources, making it difficult to establish clear boundaries for what constitutes a protected style.**
**R2: From a computational perspective, the abstract high-dimensional features of the style domain have distinctiveness.** In this paper, **to ensure the exclusivity of the style domain**, we use the identifier $z$ to maximally shift the contraction domain to the edge distribution of the style representation space. _Specifically, we decouple the style domain and perform dynamic contrastive learning to increase the similarity distance._ **The style domain is shifted into the contraction domain of the edge distribution by $z$; $z$ and the watermark to be verified are held by the defender.** The copyright boundary of the protected unit, which is the edge space distribution, can only be correctly tracked and yield the correct watermark when using $z$.
||TP|TN|Avg acc(%)|k@t@100%wd(%)|
|-------------------|-----|-----|-----------------|--------------------|
|$z$ Error|0|1000|52.15|0|
|$z$-watermarking|1000|0|99.87|97.9|
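A minimal sketch of the "increase the similarity distance" objective described in R2. The paper's dynamic contrastive weighting and the role of $z$ are not specified in this rebuttal, so the snippet below is a plain margin-based contrastive loss over style embeddings (NumPy stand-in, illustrative names), not the authors' actual training code:

```python
import numpy as np

def style_margin_loss(anchor, pos, neg, margin=1.0):
    """Pull same-unit style embeddings toward the anchor, push other
    units' embeddings beyond a squared-distance margin.

    anchor, pos, neg: arrays of shape (B, dim); returns a scalar loss
    that is zero once positives coincide with the anchor and negatives
    are at least `margin` away in squared Euclidean distance.
    """
    anchor, pos, neg = (np.asarray(a, dtype=float) for a in (anchor, pos, neg))
    d_pos = ((anchor - pos) ** 2).sum(axis=1)
    d_neg = ((anchor - neg) ** 2).sum(axis=1)
    return float(np.mean(d_pos + np.maximum(0.0, margin - d_neg)))
```

Maximizing the margin between units is what makes a wrong identifier $z$ land outside the contraction domain, as the table above illustrates.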
**Q3: Regarding the statement ‘Granting copyright protection to individual styles could hinder creativity and artistic progress by restricting the use and evolution of existing styles.’**
**R3:** We do understand your concerns. Next, we will provide a more detailed explanation below.
- **First,** **we should clarify our goal again: creators should have the right to authorize and trace their creative styles and content,** particularly in the context of unauthorized mimicry by generative AI. In the AI era, we believe this step may be urgent: ensuring intellectual property protection for human creativity.
- **Second, current instances of AI mimicry may severely hinder their motivation and damage their enthusiasm, turning high-quality works into someone else’s benefit according to Glaze's survey [5].** The survey of **1,207 artists** shows **91%** are worried about AI training on their works. Concerns include job security (**97%**), deterring new students (**88%**), and reduced creativity (**70%**). **89%** are taking action, with **53%** considering reducing their online presence. **77%** believe AI mimics their styles well, but unauthorized use remains a major concern. _Glaze collaborates with **art-centric social networks**, advocacy groups like **CAA (US) and EGAIR(EU)**, governments, and companies to protect IP and advocate for artists' style copyrights._
- **Third,** **our work continues to focus on the perspective of human creators, aiming to ensure the rights of original authors in the era of the generative AI explosion.** We aim to establish a healthy cycle for the development of art, where creators have the right to authorize and trace their creative styles and content.
**Q4: The writing of this paper has some problems.**
**R4:** Thank you for pointing out them. We will improve them to make the expression clearer.
[1] AI art lawsuits: Stability AI, DeviantArt, and Midjourney face litigation.
[2] Inc. Getty Images (US). v. stability ai, inc. Assigned To: Jennifer L. Hall.
[3] Copyright Protection: Exploring Originality and Ownership in a Digital Landscape.
[4] This artist is dominating AI-generated art.
[5] Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models, USENIX Security 2023.
---
Rebuttal Comment 1.1:
Title: My feedback
Comment: Thanks for the rebuttal. I have some concerns after reading other reviewers' comments. Reviewer z2Ei mentions that the proposed framework has many complex modules. After checking the paper again, I agree with this point. If this work is intended for artists, such complexity may hinder its adoption; if people are reluctant to use it, even an effective framework becomes useless. How can you handle this issue? Besides, based on the provided descriptions, the efficiency of the proposed framework should also be analyzed, if the target users are the artists mentioned by the authors.
---
Rebuttal 2:
Title: Thank you and further explanations to reviewer's feedback
Comment: Dear Reviewer **d18z**, please allow us to thank you again for reviewing our paper and for your valuable feedback. We understand your concerns and are providing additional explanations to address them.
---
First, our method is both simple and efficient for users: **they only need to register _the identifier $z$, the watermark, and the protected data with a third-party regulatory body, as mentioned in Section 3.1 (Defender Capability) of our paper_**. [1] notes that with the further commercialization of AIGC, the standardized data flow should involve _data owners, model providers, and public regulatory agencies (trusted third parties)_. The third party will jointly hold the unique identifier $z$ and watermark with the user, ensuring a one-to-one correspondence and non-redundancy between $z$, the watermark, and the protected data. Data copyright tracing and ownership should be initiated by the user through litigation, with processes involving judicial authorization and third-party verification. Together, the identifier $z$, watermark, and protected data constitute a personal copyright entity, enabling effective and secure copyright tracing in judicial proceedings.
Second, **the _essential security, rigor, and accuracy_ of personal copyright verification in judicial procedures are effectively supported and demonstrated by the rigor and complexity of the algorithmic framework presented in our paper**. We have designed a comprehensive and well-rounded protection and verification mechanism with a focus on personal copyright security, and we provide extensive experimental results to validate the reliability of our method.
Third, **our framework is divided into three stages: registration, computation, and inference.** _In the registration stage_, data owners register their identifier $z$ and corresponding watermark with a third-party regulatory body. _In the computation stage_, the third party uses our algorithm to perform computations and store the results after receiving the registration list. _In the inference stage_, the framework performs copyright verification on suspicious samples and models. **We analyze the efficiency of the framework as follows**: **_On one hand_**, in terms of resource consumption, during the computation stage, using a single 3090 GPU, the average computation time per user is **1** minute, with proxy sample computations averaging **3-5** minutes and memory usage approximately **3**MB. In the inference stage, the average inference time for **1000** users ranges from **30 to 100 milliseconds** (i.e., **0.065** milliseconds per user). **_On the other hand_**, regarding copyright tracing accuracy, the ASR metric is close to **100%**, with an error rate controlled below **0.1%**, and the average watermark accuracy exceeds **99%**. Of 1000 suspicious AI mimic samples, about **97%** can be successfully verified and traced through judicial proceedings (as indicated by the t@k@100wd% metric mentioned in this paper).
Users such as artists only need to register the identifier $z$ and the corresponding watermark with the third party. The design and complexity of the algorithmic framework ensure the security, rigor, and accuracy of copyright protection. Overall, our framework demonstrates its practicality in judicial security validation, resource consumption, and efficiency.
[1] Building Intelligence Identification System via Large Language Model Watermarking: A Survey and Beyond.
---
Rebuttal Comment 2.1:
Title: Thanks
Comment: Thanks for the quick reply. I will make the final decisions based on the discussions with other reviewers.
---
Reply to Comment 2.1.1:
Title: Thanks and A Gentle Reminder of the Final Feedback
Comment: Dear Reviewer **d18z**,
Please allow us to thank you again for your valuable time and constructive comments. Your comments have been instrumental in helping us clarify the significance of our work and enhance its quality.
_As the reviewer-author discussion phase is nearing its end_, we would like to know whether our explanations and experiments have properly addressed your concerns. We are more than happy to answer any additional questions. Your feedback will be greatly appreciated.
Thank you again for your thorough review and for helping us improve our work! | Summary: This paper introduces an innovative implicit $z$-watermarking scheme using disentangled style domains to protect dataset copyrights in text-to-image models. It achieves structured delineation of copyright boundaries, self-generalization, mutual exclusivity, and effective verification for hybrid or partial infringements. The method demonstrates high robustness and reliability against various challenges, marking a significant advancement in protecting copyrighted content in AI-generated visual data.
Strengths: 1. This paper designs a novel implicit $z$-watermarking scheme via disentangled style and content domains to protect dataset copyrights. Meanwhile, instead of embedding invisible information into images, the proposed self-generalization module and mutual exclusivity module are used to explore the style boundaries.
2. This paper is very detailed and well-formulated, with precise wording, clear definitions, and easy to understand.
3. Extensive experiments demonstrate the SOTA performance of the proposed method against baselines such as DCT-DWT-SVD, RivaGAN, and SSL. Obviously, the proposed $z$-watermarking is hard to remove by illegal mimic models.
Weaknesses: 1. In Table I, it is suggested to compare with some recent SOTA watermarking methods, such as TrustMark and RoSteALS.
[1] TrustMark: Universal Watermarking for Arbitrary Resolution Images.
[2] RoSteALS: Robust steganography using autoencoder latent space. In CVPRW 2023.
2. Can $z$-watermarking resist watermark removal or attack methods, such as DDIM inversion or VAE? The authors could provide some results to improve the completeness of the experiment.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please refer to the weakness.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The author has clearly presented its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer **jJoo**, thank you very much for your careful review of our paper and thoughtful comments. We hope the following responses can help clarify potential misunderstandings and alleviate your concerns.
---
**Q1:** In Table I, it is suggested to compare with some recent and SOTA watermarking methods, such as Trustmark and RoSteALS.
**R1:** Thank you for your constructive suggestions! **We have added TrustMark [1] and RoSteALS [2]** as comparison baselines. Here are more details and discussion. We set up 100 images with different watermarks. TrustMark has a length of 64 bits, RoSteALS 56 bits, and ours 128 bits. We evaluated whether the generated images contained watermarks using TP (True Positive) and TN (True Negative) counts. For watermark extraction, we used Avg acc (%) and k@t@100%wd as evaluation metrics. The experimental results indicate that previous watermarking methods are easily diluted or erased during the generation process, which is detrimental to the traceability and ownership of the samples.
| Method | TP | TN | Avg acc (%) | k@t@100%wd(%) |
|------------------|-----|-----|-------------|-------------------|
| Trustmark | 93 | 907 | 55.37 | 6.6 |
| RoSteALS | - | - | 66.50 | 7.9 |
| **Ours** | 1000| 0 | **99.83** | **97.7** |
[1] TrustMark: Universal Watermarking for Arbitrary Resolution Images.
[2] RoSteALS: Robust steganography using autoencoder latent space. In CVPRW 2023.
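The detection and accuracy metrics above can be sketched in code. This is an illustrative reconstruction with hypothetical helper names, not the authors' evaluation code; we assume "detected" means per-image bit accuracy above a threshold (the rebuttal elsewhere uses a 0.65 detection threshold), and the exact k@t@100%wd protocol is omitted.

```python
import numpy as np

def bit_accuracy(decoded: np.ndarray, embedded: np.ndarray) -> float:
    """Fraction of watermark bits recovered correctly for one image."""
    return float(np.mean(decoded == embedded))

def evaluate(decoded_bits, embedded_bits, threshold=0.65):
    """Hypothetical evaluation over a batch of watermarked images.

    TP counts images whose recovered bits exceed the detection
    threshold; TN counts the remainder; Avg acc is the mean bit
    accuracy across images. The authors' exact protocol may differ.
    """
    accs = [bit_accuracy(d, e) for d, e in zip(decoded_bits, embedded_bits)]
    tp = sum(a > threshold for a in accs)
    tn = len(accs) - tp
    return tp, tn, float(np.mean(accs))
```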
---
**Q2:** Can $z$-watermarking resist some watermark removal or attack methods, such as DDIM inversion or VAE? The authors could provide some results to improve the completeness of the experiment.
**R2:** Thank you for your constructive suggestions! We agree that understanding the impact of watermark removal is also important. In our paper's robustness experiments, we conducted experiments on Latent Attacks. To further demonstrate the superiority of our approach, we have included the following additional experiments. We hereby provide more details.
- First, **we use the watermark removal method [1]** to attack the baseline watermarking schemes and ours. For attacks using variational autoencoders, we evaluate the pre-trained image compression model Cheng2020 [2], with the compression factor set to 3. For diffusion model attacks, we use Stable Diffusion 2.0 [3], with the number of noise steps set to 60.
- Second, we chose Avg acc (average watermark accuracy), Detect Acc (percentage of images where decoded bits exceed the detection threshold 0.65), and k@t@100%wd as the evaluation metrics for watermark robustness. The result is as follows.
- Third, our method achieves an average accuracy of 97.93% and 95.81%, with a detection accuracy of 100% and k@t@100%wd of 91.5% and 87.2% under VAE and Diffusion attacks, respectively. In contrast, other methods like DCT-DWT-SVD, RivaGan, and SSL show significantly lower performance. From the results, our performance significantly surpasses other watermarking schemes after being subjected to watermark removal attacks [1].
| Method | Removal Attack Instance | Avg acc (%) | Detect Acc (%) | k@t@100%wd (%) |
|-----------|-------------------------|-------------|----------------|-------------------|
|**DCT-DWT-SVD**|VAE attack|50.17| 2.0|0.0|
|**DCT-DWT-SVD**|Diffusion attack| 54.41 | 2.8| 0.0|
|**RivaGan**|VAE attack|60.71|6.2| 0.0|
|**RivaGan**|Diffusion attack|58.23|1.8|0.0|
|**SSL**|VAE attack|62.92|15.6|0.0|
| **SSL**|Diffusion attack|63.21|16.3|0.0|
|**Ours**|VAE attack|**97.93**|**100** |**91.5**|
|**Ours**| Diffusion attack| 95.81|100|87.2|
[1] Zhao X, Zhang K, Su Z, et al. Invisible image watermarks are provably removable using generative ai[J]. arXiv preprint arXiv:2306.01953, 2023.
[2] Z. Cheng, H. Sun, M. Takeuchi, and J. Katto, “Learned image compression with discretized gaussian mixture likelihoods and attention modules,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 7939–7948.
[3] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution image synthesis with latent diffusion models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10684–10695.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. The additional experiments and explanations have addressed most of my concerns. I believe this paper presents an interesting and practically valuable work, with performance surpassing similar digital watermarking methods, making it applicable for copyright protection in generative AI. Considering the superior performance and the novelty of this approach, I decide to raise my score. | Summary: While text-to-image models excel in generating high-quality images, they also raise issues of unauthorized dataset copyright protection. This paper proposes a novel implicit watermarking scheme that detects and protects dataset copyrights by disentangling the style domain to generate watermarks. The proposed method achieves One-Sample Verification, significantly improving existing copyright protection mechanisms.
Strengths: 1. The authors are the first to utilize the disentangled style domain to detect unauthorized dataset usage in text-to-image models, and they have effectively implemented a method called z-watermarking to enhance copyright protection.
2. The authors conducted comparative experiments with state-of-the-art protection methods and robust experiments under various conditions. They also performed comprehensive ablation studies, demonstrating the effectiveness and robustness of their method. The ablation studies verify the importance and effectiveness of each module proposed by the authors.
Weaknesses: 1. This paper involves many complex modules, and in Section 3 (Method), the authors use many symbols and subscripts. However, the annotations for these symbols and subscripts are not very clear. For instance, in line 144, $\mathcal{E}_z(z_x|(x, \phi), z) = s$: it is not clear what the input to the style domain encoder is. If ‘|’ denotes a probabilistic condition, then the style domain encoder has only one input, but Figure 2 appears to show two inputs, which is confusing. For the numerous symbols and labels, it is recommended that the authors provide a unified introduction in each section.
2. The paper introduces the operational flow of the model in three parts. However, the authors focus more on explaining each module than on the connections between modules and the overall operational flow. As a result, after reading these parts, it remains difficult to grasp the overall process of the proposed method. The paper appears technically sound, but there is still significant room for improvement in presenting the technical flow.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In line 150, “Moreover, we implement layer-wise guidance dropout by selectively zeroing out portions of s_{1:m} , thereby diminishing the decoder’s dependency on sub-vector correlations.” Could you clarify the criteria used for this selection? Additionally, how does this approach effectively reduce the decoder’s dependency on sub-vector correlations? Is there existing literature that supports this conclusion, or can you provide a more detailed explanation?
2. In the experimental section, line 216, the authors tested “17 artists (e.g., Van Gogh and Monet) and 10 AI artworks.” Given that copyright protection is an incremental task, as new artists or AI artworks are created, the number of protected entities will increase. This presents two potential challenges:
1. How does the proposed method handle the increasing number of protected entities? Does it require retraining with each addition, or is there a more cost-effective solution?
2. As the number of protected entities grows, will the styles of these entities influence each other, potentially reducing the model’s detection performance?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer **z2Ei**, thank you very much for your careful review of our paper and your thoughtful comments. We hope that the following responses will help clarify any potential misunderstandings and alleviate your concerns.
---
**Q1:** Regarding “Moreover, we implement layer-wise guidance dropout by selectively zeroing out portions of $s_{1:m}$, thereby diminishing the decoder’s dependency on sub-vector correlations.” Is there existing literature that supports this conclusion? Could you provide a more detailed explanation?
**R1:** Thank you for your comments and we do understand your concerns. To further alleviate your concerns, we provide more explanations.
- Firstly, **our goal is to achieve a bidirectional mapping between images and the disentangled variables $s_{1:m}$.** Reducing co-adaptations between UNet layers enhances the model's generalization ability: neurons are less likely to rely too heavily on each other, thereby achieving decoupling. The dropout guided by zeroing out disentangled variables during training essentially encourages the model to obtain linearly independent solutions for $s_{1:m}$.
- Second, **the recent paper SODA [1]** (presented at _CVPR 2024_; a self-supervised diffusion model designed for representation learning) suggests that disentangled latent spaces better represent the generated images. **From the conclusion of reference [1]: "To improve localization and reduce correlations among the sub-vectors, we present layer masking – a layer-wise generalization of classifier-free guidance [2]."** The reference provides ample ablation experiments to validate this.
- Third, we have also thoroughly validated the correctness of this conclusion in our experiments. The results presented in Table 1, Table 2, and Figure 3 of our main experiments in the paper demonstrate the rationale behind this conclusion.
[1] Hudson D A, Zoran D, Malinowski M, et al. Soda: Bottleneck diffusion models for representation learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 23115-23127.
[2] Ho J, Salimans T. Classifier-free diffusion guidance[J]. arXiv preprint arXiv:2207.12598, 2022.
---
**Q2:** How does the proposed method handle the increasing number of protected entities? Does it require retraining with each addition, or is there a more cost-effective solution?
**R2:** We use the identifier $z$ with effectively infinite capacity to handle the increasing number of protected entities. This approach does not require retraining the style domain encoder and incurs only minimal additional cost. We will provide more explanations.
- **First, one of the advantages of the paper is to use the _identifier $z$_ to address the issue of the increasing number of protected entities.** In this paper, unrestricted $z$ can represent any text, image, video, or audio, which is encoded into $z_{\text{emb}}$ and injected into the style domain to ensure the boundary of the protected unit.
- **Second, $z$ signifies the identifier that maximally shifts the contraction domain to the edge distribution** of the style representation space. After decoupling the style domain from the negative samples and performing dynamic contrastive learning to increase the distance in the similarity space, the style domain is further shifted to the boundary space by injecting $z$.
- **Third, we do not need to retrain the style domain encoder.** We only need to decouple the protected unit into the style domain, inject the corresponding identifier $z$ to shift it to the edge distribution, and store the mapping relationship in the watermark extractor (This only incurs a minimal cost).
---
**Q3:** As the number of protected entities grows, will the styles of these entities influence each other, potentially reducing the model’s detection performance?
**R3:** Thank you for your comments and we do understand your concerns. In response to Question 2, **the advantages of our paper for addressing this issue are to decouple the style domain, utilize dynamic contrastive learning, and use the unique and infinite identifier $z$.** To further alleviate your concerns, we provide more explanations.
- First, we decouple the latent variables into sub-vectors that control image generation, thereby extracting linearly independent combinations of the image's essential features. Then, we utilize dynamic contrastive learning to set **_sample anchors, edge samples, central samples, and negative samples_** for the protected units, encouraging the style domain of the protected units to occupy mutually exclusive regions in the high-dimensional space.
- Second, we propose the identifier $z$ to further enhance the reliability and security of the solution. $z$ represents **_the unique and critical identifier_** that maximally shifts the contraction domain to the edge distribution of the style representation space. Since $z$ is an arbitrary identifier (including any text, string, image, etc.), its capacity is effectively infinite, which is sufficient to differentiate the growing number of protected entities.
- Third, we have thoroughly validated the feasibility of our approach through main and ablation experiments. **As the number of protected entities increases, the distinctiveness of the model remains robust, ensuring that the styles of these entities do not influence each protected unit.**
|128bit| 0-20% | 20-40%|40-60%|60-80% | 80-90%|90-100% | TP|TN| Avg acc (%) | k@t@100%wd (%) |
|-----------|-------|--------|--------|--------|--------|---------|-----|-----|---------------|------------------|
|**$z$ Error**| 0 |151|555|291| 3| 0 | 0 |1000|52.15|0|
|**$z$-watermarking**|0| 0 |0 |1| 6| **993** | 1000|0| **99.87** | **97.9** |
---
**Q4:** The annotations for these symbols and subscripts are not clear, and there is still significant room for improvement in introducing the technical flow.
**R4:** Thank you for pointing out the shortcomings in our writing. We will make revisions.
---
Rebuttal 2:
Comment: Dear Reviewer **z2Ei**,
Thank you once again for your valuable time and constructive comments. We would like to kindly inform you that we have already addressed your concerns in our rebuttal.
As the reviewer-author discussion phase is nearing its end, we would like to know whether our explanations and experiments have properly addressed your concerns. We are more than happy to answer any additional questions. Your feedback will be greatly appreciated. | Rebuttal 1:
Rebuttal: ## Global Response
We would like to express our gratitude to all reviewers for their thorough reading and constructive feedback. Since several main concerns were shared across reviewers, we address them in this global response.
**Rethinking Personal Data Copyright Ownership in the Era of Generative AI.** Due to the explosive growth of generative AI, an increasing number of creators' works, including creative entities, brushstrokes, and styles, are being used for unauthorized profit. In Glaze's survey of 1,207 artists, the vast majority hope for fair legislation to protect the unique artistic styles and content of their works. Unfortunately, there are currently no feasible solutions, and the issue is highly challenging. Generative AI is being maliciously used to easily learn, imitate, and plagiarize unauthorized human works for profit. This severely undermines creators' motivation, damages their enthusiasm, and turns high-quality works into profits for others. Addressing this is urgent for ensuring intellectual property rights for human creativity in the AI era. We therefore aim to establish a positive, healthy cycle between generative AI and human creation.
**$z$ identifier causes exclusivity.** In this paper, $z$ is designed to ensure the exclusivity and uniqueness of the protected unit distribution within large datasets. $z$ signifies the identifier that maximally shifts the contraction domain to the edge distribution of the style representation space, after decoupling the style domain and performing dynamic contrastive learning to increase the distance in the similarity space. Since $z$ is an arbitrary identifier (such as text, strings, images, etc.), its capacity is effectively infinite, which further enhances the reliability and security of the solution. In machine learning, there are inherent differences in the high-dimensional feature distributions of protected units. Our approach, which utilizes the $z$ identifier, decouples the style domain and employs dynamic contrastive learning, aims to shift this distribution. In the paper, we provide hypothesis testing (Section 4) and experimental data (Section 5.3) to demonstrate the feasibility of our proposed solution.
| | 0-20% | 20-40% | 40-60% | 60-80% | 80-90% | 90-100% | TP | TN | Avg acc (%) | k@t@100\%wd (%) |
|--------------------|-----------|------------|------------|------------|------------|-------------|--------|--------|----------------------------------|-----------------------------------|
| $z$ Error | 0 | 151 | 555 | 291 | 3 | 0 | 0 | 1000 | 52.15 | 0 |
| $z$-watermarking | 0 | 0 | 0 | 1 | 6 | **993** | 1000 | 0 | **99.87** | **97.9** |
**Regarding the baselines for watermarking and backdoor methods.** We compared digital watermarking methods such as DCT-DWT-SVD, RivaGan, SSL, Trustmark, and RoSteALS, as well as the backdoor-based method DIAGNOSIS. The experimental results indicate that previous watermarking methods are diluted or erased during the diffusion generation process, which is detrimental to the traceability and ownership of the sample. Additionally, we introduced watermark removal attacks (such as VAE and Diffusion Attack), where digital watermarking methods nearly fail, while our approach still demonstrates strong robustness.
---
### Comparison of Methods
| Method | TP | TN | ASR | Avg acc (%) | $k@t@100\%wd$ (%) |
|----------------|-----|-----|-----|-------------|-------------------|
| DCT-DWT-SVD | - | - | - | 57.76 | 0.1 |
| RivaGan | - | - | - | 61.34 | 0.1 |
| SSL | - | - | - | 64.39 | 0.1 |
| Trustmark | 93 | 907 | 9.3 | 55.37 | 6.6 |
| RoSteALS | - | - | - | 66.50 | 7.9 |
| DIAGNOSIS | 993 | 7 | 99.3| - | - |
| **Ours** | **1000** | 0 | **100** | **99.83** | **97.7** |
---
### Removal Attack Performance
| Method | Removal Attack Instance | Avg acc (%) | Detect Acc (%) | k@t@100%wd (%) |
|----------------|-------------------------|-------------|----------------|-------------------|
| DCT-DWT-SVD | VAE attack | 50.17 | 2.0 | 0.0 |
| | Diffusion attack | 54.41 | 2.8 | 0.0 |
| RivaGan | VAE attack | 60.71 | 6.2 | 0.0 |
| | Diffusion attack | 58.23 | 1.8 | 0.0 |
| SSL | VAE attack | 62.92 | 15.6 | 0.0 |
| | Diffusion attack | 63.21 | 16.3 | 0.0 |
| **Ours** | VAE attack | **97.93** | **100** | **91.5** |
| | Diffusion attack | **95.81** | **100** | **87.2** | | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Real-Time Recurrent Learning using Trace Units in Reinforcement Learning | Accept (poster) | Summary: This paper investigates using complex-valued diagonal RNNs for online RL, proposing a small modification (RTUs), and the authors find the resulting policies perform significantly better in online RL across various partially observable prediction and control settings.
Strengths: 1. The paper is well-written.
2. Method part has good theoretical basis and clear reasoning logic.
3. The experimental results verify the effectiveness of the algorithm in a large number of scenes.
Weaknesses: N/A
Technical Quality: 2
Clarity: 3
Questions for Authors: N/A
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The article discusses the limitations of the method in detail, which I think is reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | null | Summary: This paper introduces Recurrent Trace Units (RTUs), a lightweight extension to linear recurrent units (LRUs) that is more suitable for online reinforcement learning. In a range of ablations and experiments, the authors show that RTUs trained with Real-Time Recurrent Learning (RTRL) perform on par with or outperform LRUs trained with RTRL and GRUs trained with BPTT in online reinforcement learning.
Strengths: - the paper is sound
- the contribution is clear
- extensive ablations on toy experiments
- interesting analysis
Weaknesses: - minor contribution
- some tasks are not very complex (Mujoco-P and Mujoco-V do not require much memory).
- expensive in the multilayer setting, making it limited to simple tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In BPTT the history often needs to be truncated because of vanishing/exploding gradients. In theory, RTRL should also suffer from these problems; how are they mitigated here? Is it because of the stale gradients in practice?
- RTUs could be trained with BPTT. This raises the question of whether the increase in performance is exclusive to RTRL.
- It would be interesting to discuss the difference between the RTRL approach and the approach in [1], where the authors train general stateful policies with stochastic gradient approximation instead of BPTT, especially because the experiments are related.
[1] Time-Efficient Reinforcement Learning with Stochastic Stateful Policies
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors made the main limitation of this method clear. The paper would benefit from answering the above questions.
**Some minor points**:
- line 118: where did the non-linearity f come from (compare left and right)?
- line 148: “Each element of $h_{t-1}$” should be $h_t$
- Why not keeping the $\bar{h}_t$ notation from 3.1 in 3.2?
- the paper would benefit from proofreading; it has a couple of grammatical mistakes like "A window of past observations does not scalable well to long 30 sequences [...]"
**Conclusion**
The paper is nicely structured, with ablations and analysis, and the claimed contribution is fulfilled. Addressing the points above can improve the paper, but the contribution remains small.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and valuable feedback. Here, we respond to the points the reviewer mentioned.
>some tasks are not very complex (Mujoco-P and Mujoco-V do not require much memory).
While Mujoco-P and Mujoco-V could be considered short-term memory tasks, Reacher-POMDP and some of the POPGym environments are long-term memory tasks. Table 1 in Ni et al. [1] provides an estimate of the memory length needed to solve some tasks, and both Reacher-POMDP and Autoencode from POPGym have long memory requirements. Other environments we tested, such as CountRecall, Repeat First, and Concentration, also need a long history to be solved effectively.
[1] Tianwei Ni, Michel Ma, Benjamin Eysenbach, and Pierre-Luc Bacon. When do transformers shine in rl? decoupling memory from credit assignment. In the Thirty-seventh Conference on Neural Information Processing Systems, 2023.
>expensive in the multilayer setting, making it limited to simple tasks.
It’s true that getting the exact gradient in the multi-layer case is expensive. A common solution to this problem is to stop the gradient between the recurrent layers’ hidden states [1][2]; this approach has been shown to work in practice and can be used when scaling our RTUs to multiple layers. That said, we think a more principled approach is still needed.
[1] Kazuki Irie, Anand Gopalakrishnan, and Jürgen Schmidhuber. Exploring the promise and limits of real-time recurrent learning, 2023
[2] Nicolas Zucchet, Robert Meier, Simon Schug, Asier Mujika, and João Sacramento. Online learning of long range dependencies. In Advances in Neural Information Processing Systems, 2023.
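The stop-gradient approximation described above can be sketched as follows. This is a minimal illustration in JAX with hypothetical names, not the paper's implementation: blocking the gradient between stacked layers lets each layer maintain its own RTRL trace, trading exactness for tractability.

```python
import jax
import jax.numpy as jnp

def two_layer_step(params, h1, h2, x):
    """One step of a stacked diagonal linear recurrence.

    stop_gradient blocks gradient flow from layer 2 back into layer 1's
    hidden state, so each layer's RTRL trace can be kept independently
    (the approximation from [1][2] above).
    """
    w1, w2 = params
    h1_new = w1 * h1 + x
    h2_new = w2 * h2 + jax.lax.stop_gradient(h1_new)  # no grad into layer 1
    return h1_new, h2_new
```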
>in bptt the history is often needed to be truncated because of vanishing/exploding gradients. In theory, RTRL should also suffer from these problems, how are they mitigated here? Is it because of the stale gradients in practice?
That’s correct; RTRL can still suffer from vanishing/exploding gradients. In our parametrization, we prevent this by restricting the magnitude of the complex number to be $\in (0,1]$. We first discuss this point in Appendix D.1, showing that different parametrizations require different restrictions to avoid vanishing/exploding gradients, and that we choose the cosine representation as it has the fewest restrictions. In Appendix C.1, we discuss several ways to enforce the restriction on the magnitude of the complex numbers and provide an ablation over these approaches.
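As an illustration of such a restriction, one common parametrization (used in the LRU line of work; we are not claiming it is this paper's exact scheme) maps an unconstrained parameter to a magnitude in (0, 1):

```python
import numpy as np

def stable_magnitude(nu: np.ndarray) -> np.ndarray:
    """Map an unconstrained parameter nu to r = exp(-exp(nu)) in (0, 1).

    With |lambda| = r < 1, the recurrence h_t = lambda * h_{t-1} + x_t
    cannot explode, since |lambda|**t stays bounded for all t.
    """
    return np.exp(-np.exp(nu))
```

Different parametrizations need different such constraints, which is the trade-off the rebuttal points to in Appendix D.1.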
>RTUs could be trained with BPTT. This would raise the question whether the increase in performance is exclusively in RTRL.
That’s true: RTUs can be trained with both BPTT and RTRL. We show an experiment in Appendix E.1 (Figure 11) comparing RTUs trained with BPTT and with RTRL. RTRL does play an important role in the improved performance of RTUs over other architectures; RTUs with BPTT face the same truncation-length/computational-complexity trade-off. However, the parametrization of RTUs was needed to make RTRL tractable.
>It would be interesting to discuss the difference between the RTRL approach and the approach in [1], where the authors train general stateful policies with stochastic gradient approximation instead of BPTT; especially because the experiments are related.
Thank you for pointing out this paper; we will add it to our discussion of previous work. While that paper introduces a way of calculating an unbiased gradient estimate of BPTT, the authors note that their estimate has high variance since they introduce stochasticity to the policy's internal state. This is in contrast to our approach, where there is no added stochasticity. Additionally, the approach in [1] was not able to learn both the policy and the value function at the same time without providing additional privileged information to the value function. Our approach doesn’t have these limitations.
>line 118: where did the non-linearity f come from (compare left and right)?
That’s a typo. The f shouldn’t be there at this point of the derivation. Thank you for pointing it out; we will fix it.
> line 148: “Each element of ht−1” should be ht
That’s correct. Thank you for pointing it out.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response to my questions. The authors did clarify my concerns, which is why I will raise my score to weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response and for updating the score. | Summary: This work proposes the Recurrent Trace Unit (RTU), a modified variant of the linear recurrent unit (LRU), which has recently gained popularity as a linear-complexity model.
The RTU adopts the cosine representation of the LRU to manipulate two coupled real-valued hidden states and introduces a non-linearity (breaking the equivalence to a dense linear RNN when the non-linearity is placed in the recurrence).
Like the LRU, the RTU allows for tractable real-time recurrent learning (RTRL) through its diagonal recurrence.
The focus of this work is on evaluating RTU-RTRL for reinforcement learning (RL) in partially observable environments.
The main experiments are conducted on two partially observable variants of six MuJoCo environments for general evaluation; in addition, one MuJoCo environment (Reacher) and five POPGym environments are used as memory tasks. Positive results for RTU are consistently shown.
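For concreteness, the coupled real-valued recurrence implied by a complex diagonal eigenvalue $r e^{i\theta}$ (the cosine representation described in this summary) can be sketched as follows; this is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def rtu_step(h, x, r, theta):
    """One linear recurrence step for a single unit.

    Tracks the real/imaginary parts h = (h_re, h_im) of a complex
    hidden state with eigenvalue r * exp(i*theta); x = (x_re, x_im)
    is the projected input. This is equivalent to the complex update
    lam * h + x with lam = r * exp(i*theta).
    """
    h_re, h_im = h
    x_re, x_im = x
    new_re = r * (np.cos(theta) * h_re - np.sin(theta) * h_im) + x_re
    new_im = r * (np.sin(theta) * h_re + np.cos(theta) * h_im) + x_im
    return new_re, new_im
```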
Strengths: * Evaluating variants of LRU using RTRL *for reinforcement learning* is novel and interesting.
* Beyond the focus on the RL applications, several interesting details about LRU are discussed while developing RTU (Sec 3.2 and 3.4), which should be valuable to anyone interested in LRU in general.
* Experiments are conducted on several relevant environments.
Overall, I'm supportive of acceptance, provided that the authors will address the main issues described below.
Weaknesses: **One major issue is the repeated claims about outperforming "Transformers" while no such experiment is provided, certainly not in the main text (not even in the appendix as far as I can tell).** See:
- Abstract Line 11, "RTUs significantly outperform GRUs and Transformers"
- Line 329: "We also included a GPT2-transformer baseline"
- Line 330: "We provide the results for GPT2 in Appendix G."
- Conclusion: Line 356: "performed better than the more computationally intensive GRUs and Transformers"
I could not find the Transformer/GPT2 results in the indicated Appendix G (Figures 21 or 22) either.
Given all these emphases on this claim, I had expected to see such results in the main text, e.g., in the same style as Figure 4 and 6.
I initially considered putting 1 ("incorrect statements") to encourage the authors to fix this immediately, but in the end, I decided to put the score I would give disregarding any mentions to Transformers. Regarding Transformers, please also check the related comment in the "Questions" field below.
**There are also some clarity issues:**
* Clearer explanations to explain the gap between RTU and LRU are expected.
In Figure 1, some explanations are needed to help the readers understand why there is such a big gap between LRU and RTU when everything is equal except the architecture. Is there anything specific to RTRL? Or do you observe similar trends when T-BPTT is used for both models?
Similarly, some explanations should be provided for why the online LRU largely underperforms the RTU models in Figures 4, 5, and 6.
The only related comment I could find is Line 209: *"We found small choices in our implementation for LRU did not always behave as expected, partially due to how auto-differentiation is implemented in packages for complex numbers"*
So is LRU's problem just an implementation issue? Please clarify.
* I find Figure 2 misleading as it mixes model architectures and learning algorithms within the same comparison, i.e., it compares LRU-TBPTT with RTU-RTRL.
Instead, the comparison should be between LRU-TBPTT vs. RTU-TBPTT vs. GRU-TBPTT, and LRU-RTRL vs. RTU-RTRL, separately. The statement *"LRU and GRU with T-BPTT is not competitive with RTUs"* (caption of Figure 2) is true but does not allow us to conclude on the superiority of RTU over LRU, since two different learning algorithms are used. I acknowledge that good results are sufficiently shown later in Figures 4, 5 and 6, but Figure 2 alone is not informative.
**Some of the experimental designs are not convincing:**
* (Related to the point above) I find the experimental setting of Sec 4.2/Figure 2 under "resource constraints" too artificial (and therefore not particularly useful). Does the chosen "computational budget of 15000 FLOPs" (Line 237) correspond to something intuitive/useful? How does the memory requirement differ between RTU-RTRL and RTU-TBPTT for different values of T?
Technical Quality: 3
Clarity: 3
Questions for Authors: Most of the questions have already been asked above. These are comments and suggestions.
* Tractable RTRL with a diagonal recurrent matrix dates back to [R1] [R2] at least, which should be cited.
[R1] Gori et al. IJCNN 1989. BPS: A learning algorithm for capturing the dynamic nature of speech.
[R2] Mozer. Complex Systems 1989. A focused backpropagation algorithm for temporal pattern recognition.
* The authors' view on Transformers for POMDPs (the second paragraph of the introduction; starting at Line 20) is restricted as it lacks discussion about the "linear" variants of Transformers that are stateful and permit a "recursive" formula just like RNNs.
There are several prior works evaluating [R3] [R4] [R5] and discussing [R6] such models in the context of reinforcement learning.
[R3] Irie et al. NeurIPS 2021. Going Beyond Linear Transformers with Recurrent Fast Weight Programmers.
[R4] Irie et al. ICML 2022. A Modern Self-Referential Weight Matrix That Learns to Modify Itself.
[R5] Pramanik et al. arXiv 2023. Recurrent Linear Transformers.
[R6] Lu et al. ICML 2024. Rethinking Transformers in Solving POMDPs.
* In the equation just below Line 118 (sec 3.1), *"f("* is a typo at this stage (the non-linearity is not introduced yet).
* Regarding the non-linearity "f", for the tractable RTRL to hold, "f" has to be a *purely* element-wise function. For example, softmax or layernorm would not work there (as they introduce interactions between recurrent units); I know nobody would use softmax or layernorm as a recurrent non-linearity in practice, but it might make sense to point out that there is a condition on "f", in the strict mathematical sense.
* It is unfortunate that no further architectural advances have been integrated and evaluated. Based on [11], gating can be made compatible with RTRL, and a recent model that is closely related to LRU, "Mamba", also puts back gating to linear recurrence. So some gating could have been a natural extension too.
* The proposed RTU is not specific to RL. I'm wondering if the authors considered applying it to other supervised learning applications.
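One of the bullets above notes that for tractable RTRL with a diagonal recurrence, "f" must be purely element-wise. A minimal numerical check (my own illustration, not from the paper, with made-up input values) of why: the Jacobian of an element-wise nonlinearity is diagonal, which is what keeps the RTRL sensitivity recursion per-unit, whereas a function like softmax couples the recurrent units and produces a dense Jacobian.

```python
import numpy as np

def jacobian(f, z, eps=1e-6):
    """Central-difference numerical Jacobian of f at z."""
    n = z.size
    J = np.zeros((n, n))
    for j in range(n):
        dz = np.zeros(n)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return J

z = np.array([0.3, -1.2, 0.7])

# Element-wise nonlinearity: diagonal Jacobian, so the RTRL
# sensitivity recursion stays per-unit (tractable).
J_tanh = jacobian(np.tanh, z)
print(np.abs(J_tanh - np.diag(np.diag(J_tanh))).max())  # 0.0 — off-diagonals vanish

# Softmax mixes units: dense Jacobian, so diagonal RTRL breaks down.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

J_soft = jacobian(softmax, z)
print(np.abs(J_soft - np.diag(np.diag(J_soft))).max())  # ≈0.2 — clearly nonzero
```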
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Comments are already provided above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and valuable feedback. Here, we respond to the points the reviewer mentioned.
> One major issue is the repeated claims about outperforming "Transformers" while no such experiment is provided.
We do provide the results for GPT-2 in Appendix G. Figure 21 shows the GPT-2 performance on Mujoco-P tasks, and Figure 26 has the learning rate sweep results for GPT-2. In Figure 21, the GPT-2 line is far below all the baselines we have, so it’s easy to miss. We changed it to a brighter color now, so it’s obvious. Please check the uploaded PDF for the updated figures.
The reason for not including GPT-2 in the main text is that the number of parameters in GPT-2 is way more than any of the architectures we have, so the comparison is a bit unfair. We will modify the abstract so as to not put a lot of emphasis on this comparison since it’s not a core result of the paper.
>In Figure 1, some explanations are needed to help the readers understand why there is such a big gap between LRU and RTU
Our goal from Figure 1 was twofold:
1. Understand the utility of RTU parametrization in contrast to LRU.
2. Understand the effect of different design choices, such as non-linearity and using a linear projection layer.
By controlling for other factors, we can ascertain that the gap in performance is due to the difference in parametrization choices, i.e., using explicit complex numbers in LRU versus using rotation matrices in RTUs. Using rotation matrices avoids the complications around taking the real part of the hidden states; we show in Appendix D.2 that taking the real part of the hidden states can result in a biased gradient estimate.
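The parametrization difference can be made concrete. As a hedged sketch (my own, not from the paper; the eigenvalue and state values are made up), the check below verifies that multiplying a hidden unit by a complex eigenvalue r·e^{iθ} (LRU-style) is exactly a scaled 2×2 rotation acting on the real pair (Re h, Im h) (RTU-style), which is why the rotation-matrix form can avoid explicit complex arithmetic:

```python
import numpy as np

r, theta = 0.9, 0.4            # magnitude and phase of one recurrent unit (made-up)
lam = r * np.exp(1j * theta)   # LRU-style complex eigenvalue
h = 0.5 + 0.2j                 # current hidden state of that unit

# RTU-style: the same update using only real numbers, as a scaled
# 2x2 rotation acting on the real pair (Re h, Im h).
R = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
h_vec = np.array([h.real, h.imag])

next_complex = lam * h   # complex-arithmetic update
next_real = R @ h_vec    # real-arithmetic (rotation-matrix) update

assert np.allclose([next_complex.real, next_complex.imag], next_real)
```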
For Figure 1, we chose to use RTRL as using T-BPTT would complicate the comparison. Bad performance could be attributed to the truncation length rather than the architecture itself, while with RTRL, there is no confounding factor other than the parametrization.
>Similarly, some explanations should be provided to explain why Online LTU largely underperform RTU models
When comparing RTU with Online LRU, the core difference is RTU’s parametrization of complex numbers as rotation matrices. As we noted above, using rotation matrices avoids the complications around taking the real part of the output. We also noticed that using complex numbers combined with auto-diff libraries resulted in unexpected results when taking the gradients. This appears to be an issue related to the use of complex numbers in general, not specific to LRU. We pointed to a similar discussion in the footnote on page 5. We also emailed Online LRU authors to verify that our implementation is correct, and we used their suggested implementation of Online LRU in our experiments.
>I find Figure 2 misleading as they mix model architectures and learning algorithms within the same comparison.
The goal of Figure 2 was to highlight the trade-off between computational resources and performance when using T-BPTT versus RTRL. So, we could compare RTU-RTRL and RTU-TBPTT. However, we believe the reader would be more interested in seeing the performance of LRU-TBPTT and GRU-TBPTT to see how much benefit we get on top of these more widely used approaches when moving to RTU-RTRL. The plot becomes unreadable with too many choices. But your point is a good one, and we do provide a comparison of RTU-TBPTT with LRU-TBPTT in Appendix E, Figure 11. As expected, when using T-BPTT with RTU, the truncation length affects the performance of RTU as well.
>Does the chosen "computational budget of 15000 FLOPs" (Line 237) correspond to something intuitive/useful?
We wanted to start with a computational budget under which all architectures can learn and produce relatively good performance so the comparison is meaningful. We looked at the sizes of the GRU architectures used in [1], where the animal learning experiments were introduced, and chose similar hidden dimensions to start with. That’s how we came to the choice of 15k FLOPs specifically.
[1] Banafsheh Rafiee, Zaheer Abbas, Sina Ghiassian, Raksha Kumaraswamy, Richard S Sutton, Elliot A Ludvig, and Adam White. From eye-blinks to state construction: Diagnostic benchmarks for online representation learning. Adaptive Behavior, 2020.
>diagonal recurrent matrix dates back to [R1] [R2]
Thanks for pointing out these references. We will cite them in the introduction.
> The authors' view on Transformers is restricted
Thanks for pointing out those relevant papers. We will modify the discussion on transformers to include them. While the approaches in [R3], [R4], and [R5] are all interesting and relevant, they face the same challenges as T-BPTT-based approaches in terms of the trade-off between sequence length, computational complexity, and performance.
>Regarding the non-linearity "f", for the tractable RTRL to hold, "f" has to be a purely element-wise function.
We agree that the use of layer norm as an activation function is not conventional. The discussion we provided in Appendix A.2 was to mathematically state what conditions we need on f for the equivalence to hold. However, as we point out in the main body, we think of these theoretical results as negative results; as you pointed out, those functions are not element-wise and not conventional activation functions.
> Based on [11], gating can be made compatible with RTRL. So some gating could have been a natural extension too.
Thank you for the suggestion. We agree that gating is a natural extension of this work.
>The proposed RTU is not specific to RL.
We agree that RTUs are not specific to RL. However, we think RTRL approaches are more needed in the context of online RL than supervised learning. As in supervised learning, offline datasets are usually available, and having access to a long history is not as hard as in online RL. There are areas of online supervised learning where RTRL approaches could be useful, which could be explored in future work.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I thank the authors for their response.
> We do provide the results for GPT-2 in Appendix G. Figure 21
Thank you for the pointer to Figure 21 and 26, and the updated figures. The authors' comment regarding the "bad" colors in the old figure is accurate; the new colors help.
But I also confirm that it does not make sense to emphasize Transformer-related claims that are entirely based on results hidden in Appendix G, Figures 21 and 26 (and for other reasons I explain below).
> The reason for not including GPT-2 in the main text is that the number of parameters in GPT-2 is way more than any of the architectures we have, so the comparison is a bit unfair.
It is unreasonable to complain about the model's parameter count, as it was a deliberate choice made by the authors.
On the contrary, the current comparison seems unfair to the Transformers.
GPT-2 is a language model instantiation of Transformers.
To properly evaluate Transformers in specific RL tasks, their hyper-parameters have to be adjusted accordingly. In particular for POMDPs, a discussion on the context length is a minimum requirement in a proper evaluation, since if their context length is shorter than the memory span required for the task, they will obviously fail (we won't even need to run experiments to figure that out).
In this sense, I find all the current “GPT-2” related results reported in this paper insignificant, as are the claims regarding the "superiority" over "Transformers". I understood that this choice of Transformer architecture/GPT-2 was based on prior work, but I do not see any good reason to follow a choice which does not make sense.
I will maintain my current score on the condition that the authors promise to remove all these claims from the main text, as I initially stated. Alternatively, the authors should conduct a new set of proper experiments by tuning the Transformer hyper-parameters/architecture.
What follows are minor comments:
> Our goal from Figure 1 was twofold: Understand the utility of RTU parametrization in contrast to LRU.
This was actually my original critique: Figure 1 does not fulfill this goal. Figure 1 currently *shows* the utility of RTU but fails at helping us "understand" why it is better than LRU. It will make sense to add a sentence toward the end of Sec 4.1 to explain this gap and guide readers to take a look at Sec 3.4 and Appendix D.2.
> For Figure 1, we chose to use RTRL as using T-BPTT would complicate the comparison. Bad performance could be attributed to the truncation length rather than the architecture itself,
I do not agree with this argument. I asked whether the gap between RTU and LRU would remain the same when T-BPTT was used. If we train both models with T-BPTT, the condition is the same for the two models, so the authors’ argument does not hold. That said, I had also overlooked some details in Sec 3.4 and Appendix D.2, so I no longer find these experiments crucial.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the detailed response and feedback.
We agree that the GPT-2 results are not core to the paper, and we will modify the main text not to emphasize those results. We also wanted to point out that we tuned the context length and the learning rate for GPT-2. In Figure 26 (or Figure 2 in the uploaded pdf), we show two variants of GPT-2, with context lengths of 1024 and 64. Both of these variants should have enough context length for the Mujoco-p benchmark. Nevertheless, we agree with the reviewer, and we will change the main text accordingly.
> It will make sense to add a sentence toward the end of Sec 4.1 to explain this gap and guide readers to take a look at Sec 3.4 and Appendix D.2.
Thank you for the suggestion. We will add a couple of sentences to clarify this part. | Summary: This paper introduces a novel complex-valued, diagonally connected recurrent network model based on LRUs called Recurrent Trace Units. Due to the diagonal connectivity, the model can be trained using real-time recurrent learning in linear time. The authors further propose a version of PPO that uses stale gradients computed by RTRL during interaction. The approach is evaluated on a simple online-prediction task and a range of POMDP RL tasks.
Strengths: The paper is well written overall. All the necessary background information is summarized in a clear fashion. The main idea is laid out concisely. The experiments used for evaluation are adequate and results are displayed in a visually appealing way. Finally, the direction of research is undoubtedly very significant.
Weaknesses: A thorough ablation study of the proposed architecture is missing, i.e. taking discrete steps from LRUs towards RTUs and comparing all of them in one plot, including BPTT.
The work is not overly original, but rather an unavoidable next step after the success of LRUs.
The claim that taking the Real-part of the latent state leads to a biased gradient is not really convincing, all I see is an expression that is harder to compute.
There are many typos and misplaced words.
Somehow, the Appendix was more interesting than the actual paper.
Minor issues:
- L 29: "... does not scalable well ..."
- L 32: "... is well suited for update the state online ..."
- L 48: The sentence starting on this line looks like it needs a citation, please add something like "as we will show in section..."
- L 143, missing word: diagonal "matrix"
- L 151: "... choices made in LRUs, that they showed ..."
- L 291: The sentence starting on this line is confusing.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you comment on the claim of a biased gradient? Surely, information is lost by discarding the real part, but the gradient should not be *biased*?
- One major advantage of LRUs is the possibility to employ parallel scans reducing the complexity to sublinear in the number of units. Is this advantage thwarted by adding a non-linearity?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors comment on the limitation regarding multi-layer architectures. Societal impacts of the work are not considerably different than those of Artificial Intelligence in general ...
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and valuable feedback. Here, we respond to the points the reviewer mentioned.
>A thorough ablation study of the proposed architecture is missing, i.e. taking discrete steps from LRUs towards RTUs and comparing all of them in one plot, including BPTT.
Thank you for raising this point. Currently, in Figure 1, we provide an ablation on several architectural choices and test them one at a time while fixing everything else. This means that the only difference between RTU and LRU lines in Figure 1 is the RTU parametrization, which is the main difference between LRU and RTU.
In the left plot of Figure 1 (titled: linear recurrence), we are testing the minimal architectural choice that could be made, a linear recurrence. Hence, this plot is only testing the utility of the RTU parametrization over LRU’s with the absence of any other factor. We then move to test different architectural choices that could be made for either of them, such as adding non-linearity or a projection layer.
For Figure 1, we chose to use RTRL as the learning algorithm rather than T-BPTT. We made this choice to prevent any other confounding factor (such as truncation length in T-BPTT) from affecting the performance of either architecture. We still tested RTUs with T-BPTT in the appendix, Figure 11, but as expected with T-BPTT, the performance of RTUs would reduce due to truncated gradients.
>Somehow, the Appendix was more interesting than the actual paper.
We are glad that you found the appendix interesting. We tried to extensively answer any questions that might arise about our approach, but we still didn’t want to overwhelm the reader with a lot of mathematical details in the main body and rather focus on the core ideas, so we chose to delegate most of the mathematical work to the appendix.
>Can you comment on the claim of a biased gradient? Surely, information is lost by discarding the real part, but the gradient should not be biased?
The discarded information from taking the real part also results in discarded gradient components. This makes the gradient estimation different from the true gradient and biased towards the information coming from the real components. We provide a simple derivation of that in Appendix D.2. It is worth noting, though, that the biased gradient issue happens when the recurrent part of LRU is not followed by a linear projection. Hence, for LRU to have an unbiased gradient estimate, it needs to always have a linear projection layer afterward, which is restrictive.
Additionally, while the original LRU paper proposes always having a linear projection after LRU, our empirical evaluations in section 4.1 show that this linearity restriction actually harms LRU's performance.
>One major advantage of LRUs is the possibility to employ parallel scans reducing the complexity to sublinear in the number of units. Is this advantage thwarted by adding a non-linearity?
Currently, Linear RTUs in Eq.2, where we apply non-linearity afterward, not on the recurrence, can still benefit from parallel scans, but the Non-Linear RTUs cannot since the recurrence part is non-linear. However, there are some recent works on parallelizing non-linear recurrence, which could be explored with Non-Linear RTUs as well [1][2].
[1] Lim, Y. H., Zhu, Q., Selfridge, J., & Kasim, M. F. Parallelizing non-linear sequential models over the sequence length. In The Twelfth International Conference on Learning Representations.
[2] Gonzalez, X., Warrington, A., Smith, J. T., & Linderman, S. W. (2024). Towards Scalable and Stable Parallelization of Nonlinear RNNs. arXiv preprint arXiv:2407.19115.
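To illustrate the point above about why *linear* recurrence admits parallel scans (this sketch is my own, with made-up values, not from the paper or the cited works): each step h → λh + x_t is an affine map, and affine maps compose associatively, so a log-depth tree of `combine` calls can reproduce the sequential recurrence. A non-linear recurrence breaks this composition rule.

```python
import numpy as np
from itertools import accumulate

lam = 0.9
x = np.array([1.0, -0.5, 2.0, 0.3, -1.0, 0.7, 0.1, 0.4])

# Sequential linear recurrence: h_t = lam * h_{t-1} + x_t, starting from h_0 = 0.
h_seq, h = [], 0.0
for xt in x:
    h = lam * h + xt
    h_seq.append(h)

# Same recurrence as a scan: each step is the affine map h -> a*h + b
# with (a, b) = (lam, x_t). Composing two maps (apply f, then g) is
# associative, which is what enables a parallel (log-depth) evaluation.
def combine(f, g):
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a2 * b1 + b2)

h_scan = [b for a, b in accumulate([(lam, xt) for xt in x], combine)]
# (accumulate applies combine left-to-right here; a true parallel scan
# would combine pairs in a tree, relying on the same associativity.)

assert np.allclose(h_seq, h_scan)
```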
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and for pointing to some literature on parallelizing non-linear RNNs.
> We still tested RTUs with T-BPTT in the appendix, Figure 11, but as expected with T-BPTT, the performance of RTUs would reduce due to truncated gradients.
I feel like showing that the model performance increases significantly when trained using RTRL over when trained with BPTT is the main contribution of this paper. Paired with the timing comparison of RTRL and TBPTT over different truncation horizons, this would make a great point and I lament that it did not make it into the main part of the manuscript.
> This makes the gradient estimation different from the true gradient and biased towards the information coming from the real components.
I am still not convinced. When only using the real part I expect my gradients to be _biased_ towards the real part because this is the only quantity I am considering after all. Whereas, when using both the real and the imaginary parts, I also expect the gradients to contain some portion corresponding to the imaginary part. Comparing the two equations in Appendix D2 seems misguided.
Nonetheless, I still feel comfortable with keeping my current rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response and feedback.
> When only using the real part I expect my gradients to be biased towards the real part because this is the only quantity I am considering after all.
That is correct. We were unclear in our initial response: we meant that though a reader *might* think we can just take the gradient of the real part, this would not be the same as the actual gradient, and this is what we meant by bias. We realize "biased" is not the right word; we should simply say it would be incorrect to do that to recover the original behavior, as taking the real part discards some information. | Rebuttal 1:
Rebuttal: We would like to thank all of the reviewers for their valuable feedback on the paper. We’ve carefully considered each concern and suggestion and provided detailed responses.
While the reviewers had multiple concerns, there was no major common issue. To further address some of the reviewers' concerns regarding clarity, we added an updated version of Figures 21 and 26, changing the color of the GPT-2 baseline to be more obvious.
Pdf: /pdf/160203e01d5c5c24ea10d4b81c902468b81fe254.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In online reinforcement learning, Recurrent Neural Networks (RNN) are still better than transformers, but there are still problems that need to be addressed. In this paper, the authors propose a modified form of the recurrence equation used for Linear Recurrent Units (LRU) – a type of RNN – called a Recurrent Trace Unit (RTU). The main differences RTU introduces compared to LRU is that it uses a diagonal matrix in place of a full matrix, thus reducing computations and it uses complex valued numbers in the diagonal. Note that multiplications with complex values can be represented as multiplying by a 2x2 matrix with real valued numbers. Additionally, RTUs are a more generalized form of LRUs since they allow nonlinearity in the recurrence equation, unlike LRUs which are strictly linear. This leads to improved accuracy. The authors first explain their recurrent formulation in detail and justify their design choices.
In the second half of the paper the authors empirically perform multiple comprehensive tests that show that RTUs outperform other architectures (GRUs, LRUs, and Transformers [note: Transformer results included in supplementary] ) in online reinforcement learning tasks, providing better performance with less computation. These tests include ablation, scaling of learning under various constraints, comparison of two different training methods – Real-Time Recurrent Learning (RTRL) (i.e. incremental updates) and Truncated Backpropagation Through Time (T-BPTT) (i.e. batched updates), and analysis of how well RTUs ‘remember’ information compared to other architectures.
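A toy illustration (my own, with made-up values, not from the paper) of the difference between the two training regimes the summary mentions, for a single linear recurrent unit h_t = λh_{t-1} + x_t: RTRL carries the exact sensitivity ∂h_t/∂λ forever via s_t = h_{t-1} + λs_{t-1}, while T-BPTT resets the gradient flow at each truncation boundary and therefore discards the contribution of older history.

```python
import numpy as np

lam, T = 0.95, 4
x = np.ones(20)  # constant positive input keeps all terms positive

# Exact RTRL sensitivity: s_t = d h_t / d lam = h_{t-1} + lam * s_{t-1},
# carried across the entire history.
h = s_exact = 0.0
for xt in x:
    s_exact = h + lam * s_exact
    h = lam * h + xt

# T-BPTT approximation: gradient flow is cut at each truncation boundary,
# i.e. the sensitivity restarts from 0 every T steps (hidden state does not).
h = s_trunc = 0.0
for t, xt in enumerate(x):
    if t % T == 0:
        s_trunc = 0.0
    s_trunc = h + lam * s_trunc
    h = lam * h + xt

print(s_exact, s_trunc)  # truncation drops the long-range terms
assert s_exact > s_trunc > 0
```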
Strengths: • The paper addresses an important problem in online reinforcement learning and proposes a computationally more efficient solution, RTUs, which outperform other architectures in various online reinforcement settings
• Has comprehensive tests covering and analyzing a variety of cases, which empirically show the improvements from the proposed architecture, RTU.
• Provides detailed explanations and derivations of the RTUs formulas, making it easier for readers to understand and replicate it
• Results are explained well using figures
• Supplementary section goes into detail on many areas which are not fully explained in the main text and answers many questions that may arise.
Weaknesses: • Most of the comprehensive tests compare the proposed RTU only with GRU and LRU architectures (there are a few transformer (GPT) tests in the supplementary). It would be better if the author uses more architectures or explains why these other two architectures are sufficient for a comparison
• Explaining the importance and benefits of addressing problems with online reinforcement learning in partially observable environments would strengthen the motivation.
• Some terminology is not completely clear, such as the difference between “Reinforcement learning” and “Online reinforcement learning”
Technical Quality: 3
Clarity: 3
Questions for Authors: However, is comparing with GRU and LRU enough for online reinforcement learning tasks? Or would testing different architectures, or explaining why the chosen architectures are sufficient, be better?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Paper is very comprehensive; it motivates and both mathematically (in the supplementary) and empirically shows the improvement of their proposed architecture very well. One downside is that the paper has a lot of mathematical formulations that may be difficult to follow for someone not well versed in this topic. However, these formulations are necessary and each step of the formulations is explained in detail, making it easier to follow.
The author performs many types of tests comparing the proposed architecture with GRU and LRU, including a few transformer tests in the supplementary. However, is comparing with GRU and LRU enough for online reinforcement learning tasks? Testing different architectures or briefly explaining why the chosen architectures are sufficient would be better.
In the motivation, the author motivates why RNNs are still state of the art in reinforcement learning in partially observable environments and why transformers are not suitable for this task. However, the reason why reinforcement learning in partially observable environments is important is not explained. Having a few sentences on the importance of addressing this topic would strengthen the motivation.
Lastly, in the introduction and abstract, the author mentions both reinforcement learning and online reinforcement learning, but does not explain the differences. The authors should clarify whether these are the same concepts or are distinct.
Grammar:
• Line 188-190: “RTUs can be 189 seen as a small generalization of LRUs, moving away from strict nonlinearity” shouldn’t this be linearity, since LRU use linear operations?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and valuable feedback. Here, we respond to the points the reviewer mentioned.
> Explaining the importance and benefits of addressing problems with online reinforcement learning in partially observable environments would strengthen the motivation.
We motivated the importance of learning under partial observability in the first paragraph of the introduction when talking about how in the real world, agents perceive their environments through imperfect (i.e., partial) sensory observations. Hence, for agents to be deployed and learn online in the real world, they need the ability to predict and control under partial observability. We agree that the motivation for learning under partial observability is core to our work, and we will expand our current motivation in the introduction to emphasize the importance of learning under partial observability.
>Some terminology not completely clear, such as difference between “Reinforcement learning” and “Online reinforcement learning”
Thanks for pointing that out. When we say "online reinforcement learning" we mean that the agent learns while interacting with the environment. This contrasts with offline reinforcement learning or reinforcement learning with access to a simulator. We will add a couple of sentences to the intro to make this clear.
>is comparing with GRU and LRU enough for online reinforcement learning tasks? Or, would testing different architectures or explaining why the chosen architectures are sufficient would be better?
We chose GRU as our main baseline since it’s widely used for reinforcement learning problems with partial observability and has been shown to outperform most of the other memory components in several domains, which makes it a strong baseline to compare against [1][2][3][4]. Additionally, the authors of the original POPGym paper that we use in section 6 have extensively tested various recurrent and transformer architectures on the benchmark, and GRU was always the best-performing architecture [1]. Since we are reusing their benchmark and similar other tasks, it is natural to compare against GRU as they have already done extensive testing with the most known architectures.
While LRU has not been explored in reinforcement learning domains, we added it to our experiments because it is similar to our proposed architecture.
[1] Steven Morad, Ryan Kortvelesy, Matteo Bettini, Stephan Liwicki, and Amanda Prorok. POPGym:Benchmarking partially observable reinforcement learning. In International Conference on Learning Representations, 2023
[2] Tianwei Ni, Benjamin Eysenbach, and Ruslan Salakhutdinov. Recurrent model-free rl can be a strong baseline for many pomdps. In International Conference on Machine Learning, 2022.
[3] Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104, 2023.
[4] Pleines, M., Pallasch, M., Zimmer, F., & Preuss, M. (2023). Memory gym: Partially observable challenges to memory-based agents. In The eleventh international conference on learning representations.
> One downside is that the paper has a lot of mathematical formulations that may be difficult to follow for someone not well versed in this topic. However, these formulations are necessary and each step of the formulations is explained in detail, making it easier to follow.
We are glad that you found our mathematical formulations easy to follow. We agree that mathematical formulations can be dense, but we think they were all needed to provide the reader with a thorough understanding of our approach and answer any question that might arise about it. We chose to delegate most of the mathematical analysis to the appendix to allow the reader to understand the main contributions easily and then dive into more details about the specifics in the appendix.
>• Line 188-190: “RTUs can be 189 seen as a small generalization of LRUs, moving away from strict nonlinearity” shouldn’t this be linearity, since LRU use linear operations?
That’s correct. The sentence should be “RTUs can be seen as a small generalization of LRUs, moving away from strict linearity” | null | null | null | null | null | null |
Rethinking Memory and Communication Costs for Efficient Data Parallel Training of Large Language Models | Accept (poster) | Summary: The paper proposes a unified space to enhance ZeRO-based partitioning strategies, providing a better trade-off between memory and communication. The paper also introduces a more efficient (than ring-based) collective communication method. The core motivation of this paper is to fully leverage efficient intra-group communication by introducing intra-group partitioning as an extra option (PaRO-DP) and making the collective communication group-based (PaRO-CC).
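For context on the memory side of the trade-off the summary mentions, here is a hedged back-of-the-envelope sketch (standard ZeRO accounting for mixed-precision Adam, 2+2+12 bytes per parameter; the model size and GPU count are made up, and this ignores activations and the intra-/inter-group refinements PaRO adds on top):

```python
def zero_bytes_per_gpu(num_params, num_gpus, stage):
    """Rough per-GPU model-state memory under mixed-precision Adam,
    following the 2+2+12 bytes/param accounting of the ZeRO paper."""
    p = 2 * num_params   # fp16 parameters
    g = 2 * num_params   # fp16 gradients
    o = 12 * num_params  # fp32 master params + Adam momentum/variance
    if stage >= 1:
        o /= num_gpus    # ZeRO-1: partition optimizer states
    if stage >= 2:
        g /= num_gpus    # ZeRO-2: also partition gradients
    if stage >= 3:
        p /= num_gpus    # ZeRO-3: also partition parameters
    return p + g + o

N, psi = 64, 7e9  # e.g. a 7B-parameter model on 64 GPUs (made-up setup)
for s in range(4):
    print(f"ZeRO-{s}: {zero_bytes_per_gpu(psi, N, s) / 2**30:.1f} GiB/GPU")
```

Each stage trades memory for extra communication, which is exactly the design space the paper's intra-/inter-group partitioning enlarges.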
The paper is well written and the core insight is delivered clearly. However, the novelty is limited. For example,
- MiCS also improves the efficiency of ZeRO and collective communication based on a similar idea.
- The partitioning space is similar to Alpa, where the devices are placed into a 2D mesh, with intra-node devices as inner dimension.
- Megatron-LM suggests tensor parallelism should be within the same node generally.
Strengths: - The paper provides a systematic view of ZeRO-based partitioning strategies.
- The paper enlarges the option space of ZeRO strategies.
- The proposed PaRO-CC can improve the communication efficiency in practice.
- The problem is well formulated, easy to follow.
- The paper considers various scenarios, including full/partial-parameter training, which helps to understand the motivations.
- The experiments contain a large number of baselines, which makes them more convincing.
Weaknesses: I have some concerns about the title and claims.
- "Rethinking" is too ambitious for this paper, regarding it only focuses on ZeRO-based strategies and doesn't combine with any other stategies like TP and PP.
- "for Efficient Large Language Model Training" seems unsuitable. It lacks evidence to prove it's efficient for LLMs; as mentioned in Limitations, 3D parallelism + ZeRO-1 is usually used in LLM training. Actually, it's not necessary to couple the method with LLMs, because the proposed methods are general for most models.
- It's kind of overclaiming for "266% that of SOTA basic DP strategies", because it compares with ZeRO-3 in Figure 3(d), while MiCS has a similar performance.
For the experiments,
- ZeRO-1 is missing, which makes the experiments incomplete and less convincing. ZeRO-1 is a commonly used strategy, especially when there are many microbatches. It would be better if ZeRO-1 were also included for some scenarios.
- It would be better to compare PaRO-CC with the Hierarchical Communication Strategy in MiCS.
Figure 1 is a bit complicated.
Technical Quality: 2
Clarity: 3
Questions for Authors: - What's the difference between partial-parameters training and PEFT?
- If the scope of this paper is for LLM as in the title, could you please add more related work to elaborate the current state of LLM training?
- Can you elaborate the difference between PaRO-CC and the Hierarchical Communication Strategy in MiCS (or other previous work if there is any)?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: How to effectively integrate with current 3D parallelism (or other model parallel techniques) in LLM is a big concern, which may limit its application. In 3D parallelism,
- ZeRO-1 is usually used to avoid per-microbatch communication, because PP requires a large number of microbatches to reduce pipeline bubbles.
- tensor parallelism also leverages the high-speed intra-node communication, which will be in conflict with the proposed methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable and constructive feedback. We will address your concerns point by point.
## Weaknesses:
**Q1: concerns about the title and claims**
**A1:** Thanks again for your insightful advice. We will adjust the title to "Rethinking Memory and Communication Costs for Efficient Large Language Model Training in Data Parallelism" to better fit the study of this paper. As stated in the global rebuttal (Author Rebuttal), PaRO can be used with other n-D parallel strategies to accelerate the training of large models.
We will update the description of performance improvements with more specific and accurate language, like "PaRO improves the training speed of LLMs by up to 266% that of ZeRO-3 as basic DP strategy".
**Q2: For the experiments:**
a. ZeRO-1 is missing
b. It would be better to compare PaRO-CC with the Hierarchical Communication Strategy in MiCS.
**A2:**
a. We conducted a set of experiments comparing PaRO(NII) with ZeRO-1, using the 7B model on 32 GPUs. With the same effective batch size, the throughput of NII was found to be 48.7% higher than ZeRO-1. For detailed data, please refer to Table 1 in the PDF.
b. The difference between PaRO-CC and MiCS's Hierarchical Communication Strategy (MiCS-HCS) is that we overlap inter-group and intra-group communication, and we do not need to rearrange data blocks, avoiding unnecessary memory operations. Details can be compared between Figure 2 in the PDF and the MiCS paper. The collective communication times with 128 GPUs for PaRO-CC, MiCS-HCS, and Ring are 162, 183, and 288 ms, respectively.
**Q3: Figure 1 is a bit complicated.**
**A3:** We have redrawn the figure, please refer to Figure 1 in the PDF.
## Questions:
**Q1: What's the difference between partial-parameters training and PEFT?**
**A1:** To trade off between training efficiency and statistical performance, model training may have diverse requirements on trainable parameters in different scenarios. The ratios (trainable parameters / total parameters) of partial-parameter training and PEFT are 1/16 and 3/1000, respectively. This difference results in significant differences in memory usage and communication data volume, and the most suitable strategy also differs. Please refer to section 3.1.1 in our manuscript and Table 9 in the Appendix for more information.
**Q2: If the scope of this paper is for LLM as in the title, could you please add more related work to elaborate the current state of LLM training?**
**A2:** As mentioned in Weaknesses A1 and global rebuttal (Author Rebuttal), we will adjust the title and give more description of PaRO usage in LLM training.
**Q3: Can you elaborate the difference between PaRO-CC and the Hierarchical Communication Strategy in MiCS?**
**A3:** Same as Weaknesses A2.b.
## Limitations:
**Q1: How to effectively integrate with current 3D parallelism (or other model parallel techniques) in LLM is a big concern, which may limit its application.**
**A1:** As future work, we are currently working on further utilising PaRO in n-D parallel training for larger-scale LLM training. We find it is effective in two scenarios in the latest n-D parallel training:
1. Scenario 1 is when there is a difference in bandwidth between intra-group and inter-group communication, which is common in GPU clusters. n-D hybrid parallelism usually performs intra-node TP and inter-node PP in a subgroup of nodes, and nesting DP or PaRO in the outer groups. However, the grouping criteria is no longer one machine per group, but rather based on the network topology of the cluster. For example, in a multi-layer switch scenario, grouping can be classified based on the lower-layer switches. For another example, when training with multiple Nvidia SuperPods or similar infrastructures, one SuperPod can be used as a group.
2. Scenario 2. In scenarios with a vast number of GPUs, n-D parallelism usually uses TP + PP + DP for efficient training on over ten thousand GPUs. Following the same principles as discussed earlier, PaRO could provide a more flexible strategy as an alternative to DP in such cases.
Please refer to global rebuttal (Author Rebuttal) for more detail.
Thank you again for taking the time to review our paper. We hope we have addressed the reviewer's questions and the reviewer is willing to raise their score. Please let us know if we can provide any additional clarification or information.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
Most of my concerns are well addressed, except for the part of integrating with 3D parallelism (the statement itself already verifies its application is limited to specific scenarios). I'm not saying it's useless in 3D parallelism, and I do believe there must be some scenarios it can help. However, it's still a big concern how to combine with current methods because intra-node communication, which is the most common setting PaRO helps, is already well utilized.
Another concern is the novelty (refer to my summary). Although I appreciate the paper provides a systematic implementation for various DP strategy, for me PaRO-DP doesn't provide too much new insights. Many previous work leverages intra-group communication to accelerate training/inference. For the systematic part, PaRO-DP is kind of like a subset of Alpa. Actually for me the novelty mainly comes from PaRO-CC (although most of text are describing PaRO-DP), but I'm not quite familiar with the literature in this direction. Could you please clarify more about the novelty/originality?
---
Reply to Comment 1.1.1:
Title: Response for Reviewer VsLY
Comment: Thank you for your timely questions, which are closely related to most latest research on LLM training.
# 1. Regarding the first concern related to n-D parallelism, we present four reasons why our PaRO strategy is more effective:
## Reason 1: A DP-based method (e.g. ZeRO, PaRO) is preferred over a TP-based method when the model fits into memory.
We agree that TP is necessary for partitioning weights in extremely large models. However, for moderately large models like the 65B model, we argue that the DP-based method is preferable (These models are more commonplace when it comes to practical business applications of LLM due to the cost of extremely large LLM models and they also require efficient training over terabytes data). It includes:
**a). The computational efficiency of TP degrades due to the subdivision of matrix multiplications, and b). it is challenging to overlap communication and computation. c) TP requires significant changes to the model implementation compared to the DP-based method, making it difficult to use.**
## Reason 2: In a scenario where the TP+DP-based method is used, PaRO offers more scalability.
PaRO can be directly used as an alternative to DP in TP + DP-based methods, as demonstrated in reason 4. However, from another aspect, TP+DP typically uses DP (e.g. ZeRO) for inter-node communication, while utilizing TP for intra-node communication. In contrast, PaRO uses ZeRO for inter-group communication and still ZeRO for intra-group communication. **The main difference lies in whether TP or ZeRO is used for intra-group communication and the scope of the group. TP requires higher intra-group network requirements compared to DP-based methods, due to the much higher communication volume of activations. This necessitates that the intra-group be intra-node to take advantage of high-speed NVLink in TP.** PaRO, on the other hand, furthers the group to be inter-node as well, due to its lower communication cost. This flexibility is particularly beneficial for longer sequences with larger activations.
## Reason 3: PaRO can be orthogonally integrated with Sequence Parallelism (SP), unlike TP.
The current SP can be categorized based on whether it partitions over the head dimension (e.g. Deepspeed-ulysses) or the sequence length dimension (e.g. Ring-SelfAttention/Contextual Parallelism). **However, combining TP with the Deepspeed-ulysses-like method is challenging due to both partitioning over the head dimension.** On the other hand, the PaRO like DP method can be integrated orthogonally with both of these two sequence parallelism approaches. For example, under PaRO, the Deepspeed-ulysses style head parallelism can be leveraged for intra-node parallelism, which requires all-to-all operations and significant demand on network topology. Meanwhile, the Ring-attention style context parallelism can be applied for inter-node parallelism.
## Reason 4: PaRO can still enhance training efficiency in 4D parallelism when training extremely large LLMs.
When training extremely large models, PaRO still improves training efficiency. For example, in Llama 3.1, 4D parallelism is employed in the order of [TP, CP, PP, DP], where PaRO can be directly used as an alternative to DP. This improvement in efficiency is achieved in two ways. **Firstly, it reduces the number of participants in a communication collective by grouping, even though expensive communication is still required over 128 nodes in Llama 3.1 (DP=128). Secondly, it can leverage the heterogeneous network in a multi-layer switch scenario as discussed.**
# 2. Regarding the second concern related to Alpa, we contend that the comparison is misplaced; our solution cannot be identified through Alpa's automatic parallelism methodology.
Firstly, Alpa primarily emphasizes searching for an optimal hybrid parallel strategy under various **existing parallelism techniques** including **Intra-Operator** Parallelism (e.g., DP, TP & ZeRO) and **Inter-Operator** Parallelism (e.g., PP). It does not propose a new parallelism strategy but leverages the existing parallelism techniques, where new parallelism techniques like PaRO or Sequence parallelism are not included. In contrast, PaRO proposes a new parallelism strategy, which primarily focuses on optimizing DP across different training parameter scenarios. The grouping optimization scheme employed by PaRO is not encompassed within an automatic parallelism framework, such as Alpa. Consequently, PaRO can serve as a complementary component to Alpa.
---
Rebuttal 2:
Comment: Reason 3 is reasonable to me, because Deepspeed-ulysses is another widely used technique, where PaRO can be applied (but the specific strategy is still not clear).
> It does not propose a new parallelism strategy but leverages the existing parallelism techniques, where new parallelism techniques like PaRO or Sequence parallelism are not included
I don't think this claim is correct. Although Alpa doesn't propose any new parallelism strategy explicitly, most existing strategies including PaRO and Sequence parallelism lie in the solution space of Alpa. The main difference is, Alpa uses an op-level space while PaRO uses a model-level space (only sharding batch dimension), which means PaRO has a much smaller space so that all solutions can be enumerated while Alpa needs an ILP to search for the optimal solution. There may be some tiny differences in the details, but conceptually, PaRO is a subset of Alpa.
Given my concerns about integrating with other methods are partially addressed and the systematic implementation should be helpful to the community, I'm willing to increase my score from 4 to 5. But I still think the paper needs to be improved regarding the above discussions. More experiments about integrating with other methods to prove its efficiency would be appreciated.
---
Rebuttal Comment 2.1:
Title: Response to Reviewer VsLY
Comment: Thanks for your response. Alpa stands out as one of the most significant advancements in recent years for automatic parallelism training. It uses a fine-grained, operator-level optimizer on heterogeneous networks, but struggles to do a better job of optimizing for the structure of transformer-based models [referred to H3T (NIPS, 2023)]. As you mentioned, if we don't consider those differences in the details, PaRO or recent sequence parallelism works may exist in Alpa's search space. However, we argue that even with this consideration, Alpa's objective function remains simplified, ultimately leading to a suboptimal solution due to the omission of several key components.
1) **Alpa does not explicitly incorporate memory constraints into its optimization objective, even though these constraints are crucial for training large language models (LLMs) and are challenging to model within the objective function** [referred to MiCS (VLDB, 2022), which corresponds to the III strategy in PaRO]. For example, while MiCS(III) and PaRO-IIG have comparable communication costs, IIG stands out for its lower memory requirements. This advantage enables IIG to support a larger batch size, thereby increasing throughput, as demonstrated in Table 8 of the appendix in the manuscript. In contrast, the Alpa method fails to account for these differences. Additionally, there are several similar scenarios when trading off memory and communication costs, as discussed in our manuscript.
2) **Alpa does not explicitly consider the computation and communication overlapping in their optimization objective function.** Overlapping is mainly used by different hand-craft parallelism techniques to improve training efficiency. For example, Ring-SelfAttention saves memory for key-value (KV) pairs but incurs extra communication overhead, which is in contrast to the objective function of Alpa. Meanwhile, it optimizes efficiency by overlapping communication and computation using a ring-style peer-to-peer approach, alongside an incremental softmax inspired by FlashAttention.
Overall, mainstream LLM training uses hand-crafted parallelism (e.g. 4D-parallelism in Llama 3.1) rather than automatic parallelism such as Alpa.
Thank you again for your insightful suggestions and we will refine our manuscript based on our discussions. For the integration of PaRO with other parallelism, we are currently working as part of our future research and it requires significant workloads. | Summary: This paper introduces the Partial Redundancy Optimizer (PaRO) to improve the efficiency of training large language models (LLMs) by optimizing the trade-off between memory and communication costs. PaRO includes two main strategies: PaRO Data Parallelism (PaRO-DP), which refines model state partitioning and training procedures, and PaRO Collective Communications (PaRO-CC), which rearranges the communication topology for faster collective operations. The proposed strategies demonstrate significant improvements in training speed, up to 266% over state-of-the-art (SOTA) data parallel strategies, and 17% when applied independently to model parallel strategies like Megatron.
Strengths: - PaRO enables much more fine-grained parallelism strategy, compared to ZeRO. This facilitates much more optimized training performance in broad range of resource setup.
- Not only partitioning strategy, but this paper also proposes PaRO-CC, intra-group-aware collective communication operation.
- The paper provides extensive experiment results, including the training convergence.
Weaknesses: - In full parameter training, 512 sequence length is too small. Do you have much longer sequence length, for example, 4K or 8K?
- The latest Megatron supports more efficient training schemes. It would be helpful to show if PaRO can still achieve significantly higher training performance even when considering these methods.
Technical Quality: 2
Clarity: 3
Questions for Authors: - How PaRO determines the group size (G)? Is it a hyper-parameter?
- In the experiments, do other well-known computation optimizations (e.g., FlashAttention) are applied?
- How can users enable PaRO? Could you provide an example of the programming interface?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: No additional limitations exist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive and constructive feedback. We will address your concerns point by point.
## Weaknesses:
**Q1: Do you have much longer sequence length, for example, 4K or 8K?**
**A1:** We are currently conducting experiments with longer sequence lengths, but we have not had time to obtain the experiment results. Theoretically, longer sequence lengths should not significantly increase communication volume (only parameters and gradients are communicated in DP strategy) and are anticipated to result in similar performance improvements.
**Q2: The latest Megatron supports more efficient training schemes. It would be helpful to show if PaRO can still achieve significantly higher training performance even when considering these methods.**
**A2:** PaRO could be an alternative to DP in the latest n-D parallel training and improve training efficiency. Please refer to the global rebuttal (Author Rebuttal).
## Questions:
**Q1: How PaRO determines the group size (G)? Is it a hyper-parameter?**
**A1:** The PaRO will automatically partition groups according to the nodes, making full use of the communication advantages within the nodes. It also supports other group sizes, such as using the lower-layer switch as the grouping basis in multi-layer switch networks, and of course, custom group sizes can also be defined.
**Q2: In the experiments, do other well-known computation optimizations (e.g., FlashAttention) are applied?**
**A2:** In order to avoid introducing more factors, we did not use optimization schemes such as FlashAttention or quantization. But for the LLaMA-65B model, we enabled activation checkpointing to ensure successful training.
**Q3: How can users enable PaRO? Could you provide an example of the programming interface?**
**A3:** We provide the open-source release of our code in the paper. To enable PaRO, you just need to add "paro_strategy": "NIG" to the zero_optimization dict in the original ds_config.json, and start the training the same way as with DeepSpeed. Configuration examples can be found in the README.md file in the code repository.
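A hedged sketch of the configuration change described in A3 — only the `"paro_strategy": "NIG"` key inside `zero_optimization` is quoted from the rebuttal; the other fields are assumed standard DeepSpeed settings and may differ in the actual repository:

```python
# Illustrative ds_config.json content, expressed as a Python dict.
# Only "paro_strategy": "NIG" is taken from the authors' answer; the
# surrounding fields are typical DeepSpeed options (assumptions).
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "zero_optimization": {
        "stage": 3,
        "paro_strategy": "NIG",  # the key the authors say enables PaRO
    },
}

print(ds_config["zero_optimization"]["paro_strategy"])
```

Training would then be launched the same way as with stock DeepSpeed, per the authors' description.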
Thank you again for taking the time to review our paper. We hope we have addressed the reviewer's questions and the reviewer is willing to raise their score. Please let us know if we can provide any additional clarification or information. | Summary: This paper recast basic known distributed training strategies (such as the ZeRo-1, 2, 3, MiCS, FSDP) in a unified framework that takes into account the trade-off between memory consumption and communication. By exhibiting natural levels of granularity for the partionning of different parts of the model and optimizer, this paper makes an exhaustive list of all 27 strategies possible for the distributed training of LLMs (that include previously known ones). Then, it proposes a simple model to predict the throughput and feasibility of all strategies based on physical measurements of the cluster used and task at hand, allowing to chose the fastest feasible distributed training strategy. This model is shown to predict effectively actual performances of LLMs training, allowing to bring significant speedup compared to previous "go-to" methods in some cases.
Strengths: * **Exhaustive enumeration of possible distributed training strategies in a unified framework:** The 3 level of granularity for the partitioning of some “vector” are proposed, and seem natural in the context of GPU cluster (“no partition = copy on all worker”, partitioned inside a cluster node, or across all workers). 3 different model+optimizer parts are considered (model parameters, gradients, and optimizer’s state) as depending on the task and the use of mixed precision, these are treated differently. With the 3 level of granularity, and 3 type of vectors, it leads to $3^3=27$ possibilities for distributed strategies.
* **Simple model to predict the throughput of a given strategy:** Based on simple physical characteristics of the GPU cluster used (e.g., intra and inter node communication bandwidth, GPU memory) and the task (pretraining, parameter-efficient finetuning), a very simple model is proposed to predict what would be the fastest feasible distributed training strategy.
* **Model is shown to highly correlate to actual implementation throughput:** In part 4.2, through extensive experimentation, the model is shown to highly correlate to actual time measurements when implementing the different strategies.
* **Significant speedup can be observed in practice by using the recommended strategy:** Thanks to this model, the best feasible strategy can be chosen in advance, which can be a completely new one compared to previously known ones (such as ZeRo). In the experiments performed, this can lead to significant speedup compared to standard methods.
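The $3^3 = 27$ enumeration highlighted in the strengths above can be reproduced with a short sketch (the state-type and granularity labels here are illustrative, not the paper's notation):

```python
from itertools import product

# Three levels of partitioning granularity for each "vector":
# replicated on all workers, partitioned within a node (group),
# or partitioned across all workers.
granularities = ["replicated", "intra_node", "global"]

# Three model/optimizer parts treated independently.
state_types = ["params", "grads", "optim_state"]

# Every assignment of a granularity to each state type is a candidate strategy.
strategies = list(product(granularities, repeat=len(state_types)))
print(len(strategies))  # 3**3 = 27 candidate DP strategies
```

Known strategies (ZeRO-1/2/3, MiCS, FSDP) then correspond to particular points in this 27-element space.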
Weaknesses: * **Figures a bit hard to understand:** I found Fig.1 \& 2 hard to read and to understand.
* **Experiments in Part 4.2 do not seem exhaustive:** in line 287, 3 configurations out of the 14 considered are dismissed, leading to 11 remaining possible experiments. Yet, only 9 are reported in Figure 3, why is this the case?
* **Comparison with standard collective communication strategies other than "the ring topology" is not done:** The only collective communication strategy investigated (other than the one proposed by the authors) is the one based on the ring topology. However, this is not the only standard implemented in the industry. For instance, [NCCL, in addition to "Ring all-reduce" also propose "Tree all-reduce", which can allow significant speedup at scale compared to the ring option]( https://developer.nvidia.com/blog/massively-scale-deep-learning-training-nccl-2-4/ ) (NCCL choses automatically which to use depending on the network/task, but it is possible to force a particular strategy). Moreover, other strategies such as Butterfly All-Reduce [[Patarasuk et al., 2009]](https://www.cs.fsu.edu/~xyuan/paper/09jpdc.pdf ) or Moshpit All-Reduce [[Ryabinin et al., 2021]]( https://arxiv.org/pdf/2103.03239 ) are not considered either.
Technical Quality: 4
Clarity: 3
Questions for Authors: * line 228: *“Assuming that communication and computation can be fully overlapped, $T$ can be approximately regarded as the maximum of $t_{comm.}$ and $t_{comp.}$”* Could you be more specific when this assumption is actually valid in practice?
* line 237: *“the formula of $T$”* Do you mean $\max \\{ t_{comp.}, t_{comm.} \\}$?
* line 287: (throughput indicator) It is not clear at first that this is the “TPS indicator” (otherwise not defined) in the subsequent figures; I would advise indicating it at this stage.
* Fig 7: Small variations between the losses for the different methods are observed, although they should be mathematically identical, why is that?
**Suggestion:**
In the paragraph at line 232, I would advise specifying that the formulas (based on physical characteristics of the cluster/task) to estimate any $t_{\times \times}$ are provided in appendix A3.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive and constructive feedback. We will address your concerns point by point.
## Weaknesses:
**Q1: Figure.1 & 2 a bit hard to understand.**
**A1:** We have redrawn the figures, please refer to Figure 1 & 2 in the PDF.
**Q2: Experiments in Part 4.2 do not seem exhaustive.**
**A2:** We choose the best strategy for specific training scenarios based on the guidelines and verify it in our experiments.
**Q3: Comparison with standard collective communication strategies other than "the ring topology" is not done.**
**A3:** I appreciate you bringing this up.
The Butterfly topology sends and receives large data blocks each time, which makes it difficult to fully utilize the bandwidth and prone to delay jitter. Its performance is not as good as the Ring topology.
Compared to the Ring topology, the Tree topology in NCCL speeds up communication primitives locally. We focus on the huge difference in communication bandwidth between inter-group and intra-group interactions. Group communication is utilized to minimize inter-group communication and maximize intra-group communication. The communication can utilize either a ring or a tree structure.
The Moshpit topology is a group communication strategy for allreduce that solves cluster instability and node failure problems through an iterative averaging protocol. Our algorithm combines multi-level sharding of large models and fully utilizes bandwidth between and within machines through a grouping strategy to improve communication.
## Questions:
**Q1: line 228: “Assuming that communication and computation can be fully overlapped, 𝑇 can be approximately regarded as the maximum of 𝑡𝑐𝑜𝑚𝑚. and 𝑡𝑐𝑜𝑚𝑝.” Could you be more specific when this assumption is actually valid in practice?**
**A1:** When training transformer architecture models, the next layer can prefetch parameters while the previous layer is computing, allowing computation and communication to happen simultaneously. Here we simplify and assume perfect overlap, meaning the training time for a single step is determined by the longer of the two, computation or communication.
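The simplification in A1 can be written as a tiny cost model (illustrative only; the timing values are hypothetical, not from the paper):

```python
def step_time(t_comp_ms, t_comm_ms, overlap=True):
    """Per-step time under the full-overlap assumption vs. no overlap.

    With perfect overlap of parameter prefetch and layer computation,
    the step is bounded by the slower of the two; without overlap the
    two costs simply add.
    """
    if overlap:
        return max(t_comp_ms, t_comm_ms)
    return t_comp_ms + t_comm_ms

# e.g. 120 ms compute, 90 ms communication per step:
print(step_time(120, 90))                 # overlapped: 120 ms
print(step_time(120, 90, overlap=False))  # serialized: 210 ms
```

In practice, overlap is only partial when a layer's communication exceeds the preceding layer's computation, which is why the authors present $T \approx \max\{t_{comp.}, t_{comm.}\}$ as an approximation.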
**Q2: line 237: “the formula of 𝑇” Do you mean max{𝑡𝑐𝑜𝑚𝑝.,𝑡𝑐𝑜𝑚𝑚.}?**
**A2:** Yes. We will make it a numbered formula in our manuscript for better clarification.
**Q3: line 287: (throughput indicator) It not clear at first that this is the “TPS indicator” (otherwise not defined) in the subsequent figures, I would advice indicating it at this stage.**
**A3:** Yes, the throughput indicator (log(1/T)) is the “TPS indicator” in the figures. We will clarify this in the subsequent version.
**Q4: Fig 7: Small variations between the losses for the different methods are observed, although they should be mathematically identical, why is that?**
**A4:** Thanks for pointing this out. We have verified the values of the model parameter, gradient, and optimizer state before and after an update step, and the errors of different strategies remain within the normal machine precision range (1e-6). Small variations between the losses for different methods, despite their mathematical equivalence, can be attributed to floating-point representation errors in computers. These truncation errors accumulate during different steps of the calculations. As a result, the outputs may show slight discrepancies. However, these factors do not impact the statistical performance during training or the consistency in convergence behavior. The phenomenon is also consistent across ZeRO-1/2/3.
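A minimal illustration of the floating-point effect described in A4: addition is not associative, so partitioning strategies that reduce gradients in different orders yield results that differ at machine precision.

```python
# Summing the same values in a different order gives slightly
# different results, which explains the small loss variations
# between mathematically identical strategies.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)      # False
print(abs(left - right))  # on the order of 1e-16
```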
Thank you again for taking the time to review our paper. We hope we have addressed the reviewer's questions and the reviewer is willing to raise their score. Please let us know if we can provide any additional clarification or information.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their efforts in their rebuttal, which answered some of my concerns. Given that I think this work could appeal to the community and spark interesting discussions, I raise my score by one point.
Rebuttal: We thank the reviewers for these insightful comments and constructive advice. We offer a general response here and respond to each reviewer individually.
The proposed PaRO can be used as a standalone DP strategy or combined with other parallel strategies for n-D parallel training. PaRO-CC communication optimization can also be widely applied to distributed training strategies with global collective communication operations. Next, we'll introduce the practical applications of PaRO.
Firstly, PaRO is not only effective in scenarios where there is a difference between intra-group and inter-group communication costs, but it is also particularly useful when there are a large number of nodes and GPUs (e.g. more than thousands of GPUs). This is because it becomes difficult to maintain a linear speedup ratio for collective communication operators when the number of GPUs is particularly large. The PyTorch community has recently proposed Hybrid Sharding Data Parallel (HSDP) as a 2D strategy for performing FSDP within a host and DDP across hosts to reduce the number of nodes involved in collective communication. While HSDP aims to solve the same problem as PaRO, it was not as mature at the time of drafting this article and therefore was not compared. Additionally, due to hardware limitations, scenarios with more than thousands of GPUs were not tested. Lastly, as demonstrated in our manuscript, PaRO provides more flexibility to more complicated machine learning systems, such as distributed RLHF systems.
As future work, we are currently working on further utilising PaRO in n-D parallel training for larger-scale LLM training. We find it is effective in two scenarios in the latest n-D parallel training:
* Scenario 1 is when there is a difference in bandwidth between intra-group and inter-group communication, which is common in GPU clusters. n-D hybrid parallelism usually performs intra-node TP and inter-node PP in a subgroup of nodes, and nesting DP or PaRO in the outer groups. However, the grouping criteria is no longer one machine per group, but rather based on the network topology of the cluster. For example, in a multi-layer switch scenario, grouping can be classified based on the lower-layer switches. For another example, when training with multiple Nvidia SuperPods or similar infrastructures, one SuperPod can be used as a group.
* Scenario 2. In scenarios with a vast number of GPUs, n-D parallelism usually uses TP + PP + DP for efficient training on over ten thousand GPUs. Following the same principles as discussed earlier, PaRO could provide a more flexible strategy as an alternative to DP in such cases.
It is worth noting that the reason why this article uses "intra-group" and "inter-group" instead of "intra-machine" and "inter-machine" to describe network topology is precisely because the grouping criteria for network topology can be diverse, as is the case when using n-D parallelism.
It is also worth emphasizing again that PaRO is a non-intrusive distributed training solution for LLM training as ZeRO. We believe it is one of the few feasible non-intrusive acceleration solutions for more than thousands of GPUs.
Pdf: /pdf/7e0257a957136d0623e2ce14f1067c8f8a8f77c0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Provably Optimal Memory Capacity for Modern Hopfield Models: Transformer-Compatible Dense Associative Memories as Spherical Codes | Accept (poster) | Summary: Suggests a method to optimize kernel versions of Modern Hopfield Models by mapping memories onto well-separated values on a sphere in feature space. Shows that the spherical arrangement of memories is optimal for retrieval (e.g., maximizes capacity) and thus can improve upon the linear kernel [Wu et al. 2024a], which optimizes memory separation.
Strengths: * Originality: the idea presented is quite simple and intuitive, but the perfected procedure is non-trivial and can potentially improve memory performance across a large family of relevant MHMs.
* Clarity: the submission is clearly written and well organized.
Weaknesses: * Quality: the algorithm's main justification is a lower bound that is not shown to be tight, so optimizing the minimal overlap between memories is not well supported (here).
* Quality: if the main claim of the paper is improving the capacity to store memories, I would expect a numerical validation that the algorithm improves the number of memories that may be stored or increases the amount of noise that may be tolerated. The included results (sections 3.1 and 3.2) are only qualitative and are achieved on toy problems. Such a comparison can be done relative to plain-vanilla dense MHM or to [Wu et al. 2024a].
* Significance: the results are mostly of practical importance for practitioners who wish to implement MHM with an optimal kernel. For that crowd, it is not that important if the algorithm is based on a novel lower bound but that the algorithm works, which needs to be better established.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the lower bound on memory capacity (Lemma 2.1) contribute to the derived algorithm? It depends on R and D of the embedding, but those are not optimized for in the algorithm. Furthermore, it doesn't depend on the minimal memory separation.
2. What is Lambda in Definition 2.8, Theorem 2.1?
3. Is the lower bound from definition 2.7 tight? Can you establish it is not vacuous?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have presented a theoretical approach based on a lower bound for capacity, which may not directly translate to improvement in task performance and has not shown such improvements through experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Weakness
> **W1:** The algorithm's main justification is a lower bound that is not shown to be tight, so optimizing the minimal overlap between memories is not well supported (here)...
**Response:**
Sorry for the confusion caused. We want to emphasize that the inequality in Def. 2.7 is a condition instead of a lower bound. We understand the submitted draft is a bit ambiguous about this. We have made the following modifications to clarify.
We decompose Def.2.7 into two separate definitions:
* >**Definition 1 (Well-Separation Condition).**
Given a set of memory patterns
`$\Phi(\mathcal{V}) = \{ \Phi(v_\mu)\}_{\mu=1}^M \subseteq \mathbb{S}^{D_\Phi-1}$`, we say the memory pattern $\Phi(v_\mu)$ satisfies the well-separation condition if the following holds:
$$
\Delta_{\mu}^\Phi \geq \frac{1}{\beta} \ln\left( \frac{2(M-1)}{R_\Phi} \right).
$$
* > **Definition 2. (Memory Code).** Given a finite set of points `$\Phi(\mathcal{V}) = \{ \Phi(v_\mu)\}_{\mu=1}^M \subseteq \mathbb{S}^{D_\Phi-1}$`, and $\beta > 0$.
We say $\Phi(\mathcal{V})$
is a memory code if all points in $\mathcal{V}$ satisfy the *well-separation* condition, i.e.,
$$
\Delta_{\mu}^\Phi \geq \frac{1}{\beta} \ln\left( \frac{2(M-1)}{R_\Phi} \right),
\quad \forall \mu \in [M].
$$
Further, we denote $\Lambda$ as the set of all memory codes in $\mathbb{S}^{D_\Phi - 1}$, such that
$$
\Lambda := \left\{ \Phi(\mathcal{V}) \;\middle|\; \Phi(\mathcal{V}) \text{ is a memory code} \right\}.
$$
We also added intuitive explanations:
* > Note that this inequality is a desired property of the code/data which benefits memory storage. In the following section, we demonstrate methods to achieve such property for all different $\mathcal{V}$.
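For concreteness, the well-separation condition is easy to check numerically. Below is a minimal sketch that defines $\Delta_\mu^\Phi$ as the gap between a pattern's self-overlap and its largest overlap with any other stored pattern (the usual separation in the Hopfield literature) and takes $R_\Phi = 1$ on the unit sphere; both choices are illustrative assumptions, not the paper's exact constants:

```python
import math
import random

def separation(F, mu):
    """Delta_mu: gap between pattern mu's self-overlap and its largest overlap
    with any other stored pattern (illustrative definition of separation)."""
    self_ip = sum(a * a for a in F[mu])
    largest_other = max(sum(a * b for a, b in zip(F[mu], F[nu]))
                        for nu in range(len(F)) if nu != mu)
    return self_ip - largest_other

def random_unit(d, rng):
    """Sample a uniformly random point on the (d-1)-sphere."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

rng = random.Random(0)
M, D, beta, R_phi = 8, 64, 10.0, 1.0  # illustrative sizes; R_Phi taken as 1 on the unit sphere
F = [random_unit(D, rng) for _ in range(M)]
threshold = math.log(2 * (M - 1) / R_phi) / beta  # (1/beta) * ln(2(M-1)/R_Phi)
well_separated = all(separation(F, mu) >= threshold for mu in range(M))
print(well_separated)
```

With few patterns in a high-dimensional sphere, random points tend to satisfy the condition; crowding the sphere (large $M$, small $D$) makes it fail, which is exactly the regime the feature map is meant to fix.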
**Moreover, our Algorithm 1 is actually built on an upper bound, given at the inequality in `line 585`.**
We hope these clarifications have addressed your concern.
> **W2:** If the main claim of the paper is improving the capacity to store memories, I would expect a numerical validatio...
**Response:**
Thank you for pointing that out. We recognize that many reviewers have identified the same problem of lacking experiments. **In response, we have added *multiple instance learning* and *memory retrieval* experiments to support our main result.**
Please see Table 2 and Figure 3 of the attached pdf.
Importantly, these new experimental results indicate
* KHMs improve the retrieval outcome by a large margin.
* U-Hop+ improves the MHM-based model in multiple instance learning tasks.
> **W3:** Significance: the results are mostly of practical importance for practitioners who wish to implement MHM with an optimal kernel. For that crowd, it is not that important if the algorithm is based on a novel lower bound but that the algorithm works, which needs to be better established.
**Response:**
The main purpose of this paper is to
1. Identify the quantity of optimal memory capacity
2. Justify a theoretically grounded algorithm for achieving optimal capacity.
We want to emphasize that, for practical usage, **the method provides empirical improvement with or without our justification.** We will better establish this in our experiment section. Please see attached pdf for more validations.
### Questions
> **Q1:** How does the lower bound on memory capacity (lemma 2.1) contribute to ...
**Response:**
Two main reasons for us to demonstrate Lemma 2.1:
* Showing that the traditional approach in *Hopfield Networks is All You Need* does not work under the flexibility provided by KHMs.
* Demonstrating the exponential scaling behavior in $D_\Phi$, which corresponds to the quantity $M^\star$. To emphasize this relationship, we added the following proposition and description.
>> **Proposition.** Following Theorem 2.1, let $\theta \in \left( 0, \frac{\pi}{2} \right)$. Then we have
$$
e^{ \varphi(\theta) D_\Phi (1 + o(1)) }
\geq
M^\star
\geq
(1 + o(1)) \sqrt{2 \pi D_\Phi} \cdot
\frac{ \cos\theta }{ \sin^{D_\Phi - 1}\theta },
$$
where $\theta = \arccos\left( \rho( C_{opt} ) \right)$ and $\varphi(\theta) = -\log \sin\theta$.
And a remark
>> **Remark.** We have $M^\star = O(c^{D_\Phi})$, for some $c > 1$. This property echoes the exponential capacity lower bound in Lemma 2.1.
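To see this exponential scaling concretely, one can evaluate the right-hand side of the proposition numerically after dropping the $(1 + o(1))$ factor (an illustrative simplification, not the exact bound):

```python
import math

def capacity_lower_bound(D, theta):
    """Right-hand side of the proposition with the (1 + o(1)) factor dropped:
    sqrt(2 * pi * D) * cos(theta) / sin(theta)^(D - 1)."""
    return math.sqrt(2 * math.pi * D) * math.cos(theta) / math.sin(theta) ** (D - 1)

theta = math.radians(60)  # minimal pairwise angle of 60 degrees (illustrative choice)
for D in (8, 16, 32):
    print(D, capacity_lower_bound(D, theta))
```

The bound grows like $(1/\sin\theta)^{D_\Phi}$, matching $M^\star = O(c^{D_\Phi})$ with $c = 1/\sin\theta > 1$.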
---
> **Q2:** What is Lambda in Definition 2.8, Theorem 2.1?
**Response:** Lambda is defined in the last line of our Definition 2.7. It refers to the set containing all memory codes.
> **Q3:** Is the lower bound from definition 2.7 tight? Can you establish it is not vacuous?
**Response:**
The inequality in Definition 2.7 does not serve as a lower bound. Instead, it is a condition on the data that we want to achieve with Algorithm 1, because this condition benefits memory storage. The goal of Algorithm 1 is to make the condition hold for all memories in feature space.
We would also like to highlight that this is a commonly used storage condition in [1, 2, 3, 4], showing that it is a well established/recognized condition.
* [1] Hopfield Networks is All You Need. ICLR 2021
* [2] On Sparse Modern Hopfield Model. NeurIPS 2023
* [3] Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models. ICML 2024
* [4] Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
===
Thank you for your suggestions and attention to detail!
We hope that our responses adequately address your concerns, and we look forward to any further feedback.
Thank you again for your time and consideration!
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I appreciate the authors' improvements in the manuscript and believe my main concerns have been addressed. I'm especially impressed by the added experiments, which, to me, provide important empirical support to the mathematical justification. I will raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer vthX,
We are happy to hear that our revisions have addressed your concerns.
Thank you again for your constructive comments, which are pivotal in improving our draft and presenting a clearer view of this work. We truly appreciate your thorough review.
Best,
Authors | Summary: This work provides a theoretical analysis of the optimal capacity on both Modern Hopfield Networks (MHNs) and Kernelized Hopfield Networks (KHNs). Specifically, it seeks to answer four fundamental problems:
1. How does memory capacity affect the gap between memories?
2. Given a fixed feature dimensionality, how to select an optimal separation function?
3. Is there a simpler and more efficient algorithm that optimizes the kernel?
4. How to connect the gap between memories and the optimal capacity of KHNs theoretically?
By viewing the memory set as a spherical code, the work casts the problem of storing memories as a hard-max problem: maximizing the number of patterns stored while ensuring that the angle between any two of them is at least a certain constant (which the work does show). Consequently, this portrayal of the problem enables a simpler algorithm to optimize the kernel in contrast to the previous work UHop. Furthermore, the work illustrates that the optimal memory capacity is strongly dependent on the projection function $\phi$, whose optimality maximizes the gap (or angle) between each pair of memories.
Strengths: The formulation of the problem presented in the work is great while the proofs are easy to follow. Moreover, the work achieves its goals in answering its four fundamental questions while providing a nice associative memory framework, which could possibly also be applied to other MHNs.
Weaknesses: There is a lack of experimentation in this work. It would be great for the work to have comparisons, similar to those detailed in [Uhop](https://arxiv.org/pdf/2404.03827).
Technical Quality: 4
Clarity: 4
Questions for Authors: For figure 4, what is the gray dot in the center of the circle? Is it an image (of a class) in CIFAR or just a point indicating the center?
For the derivative of the RHS of the bound, $\Delta^\Phi_\text{min} + \frac{1}{2\beta} \ln (\frac{1}{2} \Delta^\Phi_\text{min}) \geq \frac{1}{\beta} \ln (2 (M - 1))$, is it not $\frac{1}{\beta (M - 1)}$ instead of $\frac{1}{2\beta M}$?
Is it possible to see the loss curve for MNIST when optimizing $\mathbf{W}$? It would be great to see the stability of the new $\mathcal{L}$ just to get a sense of $O(\frac{1}{-4tR^2 - \mathcal{L}^*_\phi(\Theta)})$.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: There is a significant lack of experimentation (or comparisons) in this work. Other types of MHN might require modifications or a different framework to analyze their optimal capacity and compatibility with KHNs. The feature map is rather simple --- it is a linear affine function. Lastly, the spherical code framework only considers normalized points on the hypersphere.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** There is a lack of experimentation in this work. It would be great for the work to have comparisons, similar to those detailed in Uhop.
**Response:**
Thank you for pointing that out. We recognize that many reviewers have identified the same problem of lacking experiments. **We have added additional memory retrieval and multiple instance learning experiments to support our main result.**
The new multiple instance learning results and retrieval results are in Table 2 and Figure 3 of the attached pdf.
> **Q1:** For figure 4, what is the gray dot in the center of the circle? Is it an image (of a class) in CIFAR or just a point indicating the center?
**Response:**
Yes, sorry for the confusion, and thanks for your attention to detail! We used the dot to indicate the center of the circle; it is not an image in CIFAR. We will remove it in our final version.
> **Q2:** For the derivative of the RHS of the bound, …
**Response:**
Yes, you are correct. Again, thanks for your attention to detail! We have modified our proof accordingly.
Meanwhile, this correction does not affect our conclusion that the RHS is increasing in M.
> **Q3:** Is it possible to see the loss curve for MNIST when optimizing $\mathbf{W}$? It would be great to see the stability of the new $\mathcal{L}$ just to get a sense of ...?
**Response:**
Yes. Thanks for the suggestion. We provide the visualization of the loss curve for MNIST in Figure 4 of the attached pdf.
===
Thank you for your suggestions and attention to detail!
We hope that our responses adequately address the reviewer's concerns, and we look forward to any further feedback.
Thank you for your time and consideration!
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response and hard work. Based on the provided results, I am happy to raise my score to 7.
Side comments:
Yes, indeed the slight error in the RHS of the bound does not change the main statement.
Are the error bars of each line in the plots provided in the response based on standard error? If not, see if plotting the error bars as standard error could help aesthetically.
Cheers.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your constructive comments. We truly appreciate your thorough review!
We will take the aesthetic suggestion into consideration. Thank you! | Summary: The paper is a theoretical analysis of the memory capacity of MHMs and KHMs. It establishes a connection between memory capacity and spherical codes. A method for approximating optimal memory capacity is introduced. Experiments are conducted to validate the theoretical findings.
Strengths: The paper is well written and easy to follow.
The memory capacity is an important problem that can have a significant impact on SOTA architectures like Transformers.
The paper is an interesting theoretical discussion and shows a connection between memory and spherical codes in an original manner.
The theoretical framework and parts of the proofs are based on [Ramsauer et al., 2020] with adaptations to KHMs.
These previous concepts are extended to get results for optimal memory capacity and to introduce an objective for optimal memory capacity.
This objective is relaxed by an "average separation loss" introduced in [Wu et al, 2024a].
Weaknesses: The optimization procedure of U-Hop is similar to U-Hop+, i.e., Gradient Descent is replaced by PGD. Due to this similarity, U-Hop should have been added as a baseline.
Since in the standard case MHMs use learnable weight matrices (as can be seen in Eq. 10 of [Ramsauer et al., 2020] and other publications using MHMs), there should have been a comparison to these MHMs using gradient descent based on the average separation loss.
While memory capacity is an important property, it is not immediately clear how much it helps for learning tasks. Therefore, experiments in a similar vein to [Wu et al., 2024a] should have been conducted on these tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: How do you make sure that in Algorithm 1 the weight matrix W keeps the full column rank property from Definition 2.7?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** The optimization procedure of U-Hop is similar to U-Hop+, i.e., Gradient Descent is replaced by PGD. Due to this similarity, U-Hop should have been added as a baseline. Since in the standard case MHMs use learnable weight matrices (as can be seen in Eq. 10 of [Ramsauer et al., 2020] and other publications using MHMs), there should have been a comparison to these MHMs using gradient descent based on the average separation loss.
**Response:**
Thank you for pointing that out. The short answer to this is — **Equation 10 in [Ramsauer et al., 2020] does not ensure a well-defined retrieval update of an associative memory model.**
For reasons:
* Equation 10 will only be a legal update rule for auto-associative memory if $W_V$ is excluded, with $W_K Y, W_Q R$ being memory and query, instead of $Y, R$. The consideration of hetero-associative memory is beyond the scope of this paper.
* Equation 10 without $W_V$ minimizes the energy function of $W_K Y$ and $W_Q R$, instead of real data $Y$, $R$. The retrieved pattern will be in $W_K Y$ space instead of $Y$ space. Thus, under MHM construction, we are not able to separate $W_K$ and $Y$.
> **W2:** While memory capacity is an important property it is not immediately clear how much it helps for learning tasks. Therefore, experiments in a similar vein to [Wu et al, 2024a] should have been conducted on these tasks.
**Response:**
We recognize that many reviewers have identified the same problem of lacking experiments. In response, **we have added memory retrieval and multiple instance learning experiments to support our main results.**
In our experimental results, U-Hop and U-Hop+ perform similarly, but the theoretical justification of Algorithm 1 makes this paper theoretically grounded.
For further experimental evidence, we have added the multiple instance learning results and retrieval results in Table 2 and Figure 3 of the attached pdf.
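The projected-gradient idea behind this family of methods can be sketched on a toy problem: spread points on the unit sphere by penalizing pairwise overlaps, projecting back to unit norm after every gradient step. This is only an illustrative sketch of separation maximization; the penalty, temperature, and step size below are our own choices, not the paper's Algorithm 1:

```python
import numpy as np

def pgd_spread(M, D, beta=2.0, steps=500, lr=0.005, seed=0):
    """Toy PGD: minimize the soft overlap penalty sum_{mu != nu} exp(beta * <x_mu, x_nu>)
    over M points on the (D-1)-sphere, projecting onto the sphere after each step."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((M, D))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    for _ in range(steps):
        W = np.exp(beta * (X @ X.T))
        np.fill_diagonal(W, 0.0)                       # only off-diagonal overlaps count
        X = X - lr * 2.0 * beta * (W @ X)              # gradient step on the penalty
        X /= np.linalg.norm(X, axis=1, keepdims=True)  # projection back onto the sphere
    return X

def max_overlap(Y):
    """Largest off-diagonal inner product (smaller means better separated)."""
    G = Y @ Y.T
    np.fill_diagonal(G, -np.inf)
    return G.max()

before = max_overlap(pgd_spread(8, 16, steps=0))  # random initialization
after = max_overlap(pgd_spread(8, 16))            # after projected gradient descent
```

After optimization the maximal pairwise overlap should shrink, i.e., the minimal separation grows, which is the quantity the well-separation condition constrains.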
> **Q1:** How do you make sure that in Algorithm 1 the weight matrix W keeps the full column rank property from Definition 2.7?
**Response:**
In our algorithm, we do not force W to be full-rank. Existing methods such as [1, 2] are able to satisfy this property in practice. We choose to only analyze the simple PGD case to keep the convergence analysis simple. **Since $\mathbf{W}$ is in a high-dimensional space, it is almost surely full-rank** (The probability of a randomly initialized matrix having full column-rank is 1). [Remark 2.1, Wu et al 2024] also explains this implication in detail. Thus we choose to not explicitly force this property in practice.
[1] McTorch, a manifold optimization library for deep learning
[2] Trivializations for Gradient-Based Optimization on Manifolds (NIPS)
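The "almost surely full column rank" claim is also easy to check numerically; a minimal sketch with illustrative dimensions (not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
D_phi, d = 64, 16  # feature dimension and data dimension (illustrative sizes)
# Rank-deficient Gaussian matrices form a measure-zero set, so random draws
# should have full column rank in practice.
ranks = [np.linalg.matrix_rank(rng.standard_normal((D_phi, d))) for _ in range(100)]
print(all(r == d for r in ranks))
```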
===
Thank you for your time and feedback! We look forward to further discussions!
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the clarifications. Thus, I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you! | Summary: Modern Hopfield Networks (MHNs) are limited because they require sufficient minimal separation $\Delta_{\min}$ for theoretical guarantee. Kernelized Hopfield Models (KHNs) mitigate this limitation by storing the memories in the feature space. In particular, theoretical analysis in [Wu et al., 2024a] uses a linear kernel. The current paper proposes an algorithm to find the "optimal" feature map using spherical coding. By solving a surrogate optimization problem whose solution converges to the HardMax problem asymptotically with vanishing temperature, the algorithm improves the memory capacity.
Strengths: MHNs emerge as a powerful tool for theoretical understanding of neural networks such as Transformers. The current paper builds upon KHNs and gives analysis of necessary conditions for KHNs to achieve optimal memory capacity. Further, a sub-linear time algorithm is proposed to optimize the feature map.
Weaknesses: Prior work on KHNs already has analysis using linear kernel. It would be helpful to showcase the improvement brought by the current algorithm to improve clarity and contextualization relative to prior work. Moreover, the experiments only demonstrate the convergence property of the iterations. It would be nice to have more practical experiments on tasks such as multiple instance learning and retrieval. I could improve my evaluation if the authors would address the questions below.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Since MHN is a special case of KHN, how do the current results reduce to the MHN results in [Ramsauer et al., 2020]? Are there any improvements?
2. Definition 2.7 assumes that the patterns are normalized. This assumption is not explicitly stated elsewhere and is not well-justified.
3. Definition 2.8 is more like a statement rather than a definition. Why should the definition lead to the conclusion that "there is a feature map $\Phi$ such that $\Phi(V)$ is a memory code"?
4. Is the convergence in Theorem 2.2 pointwise convergence or uniform convergence? For a given $\tau>0,$ what's the error rate?
5. The proposed algorithm learns a linear map, what about non-linear maps?
6. In line 597, since you are referring to a known upper bound in a book, it would be better to provide more details for the reader to find the result.
There are some typos:
Line 48: does -> is
Line 105: lower bounded by what?
Definition 2.5: It seems that there should be an arg in front of minmax
Definition 2.7: satisfies -> satisfy
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper is of theoretical nature. A paragraph on limitation is given in the main content.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** Prior work on KHNs already has analysis using linear kernels. It would be helpful to showcase the improvement brought by the current algorithm to improve clarity and contextualization relative to prior work.
**Response:**
The prior work in KHN mainly explains how separation maximization helps capacity but lacks theoretical justification for why their algorithm improves capacity.
To fill this gap, this work provides a rigorous theoretical analysis of why and how separation maximization improves KHN memory capacity. Additionally, we introduce the notion of optimal capacity and prove the asymptotic conditions for achieving it.
We acknowledge that the current draft does not emphasize these points enough. We will update the final version accordingly.
> **W2:** Moreover, the experiments only demonstrate the convergence property of the iterations. It would be nice to have more practical experiments on tasks such as multiple instance learning and retrieval. I could improve my evaluation if the authors would address the questions below.
**Response:** In response to more experiments, **we have added new MIL results and retrieval results** in Table 2 and Figure 3 of the attached pdf.
These additional experiments demonstrate clear improvements over standard MHN.
> **Q1:** Since MHN is a special case of KHN, how do the current results reduce to the MHN results in [Ramsauer et al., 2020]? Are there any improvements?
**Response:**
**The main difference when reducing to MHN is that MHN does not guarantee the well-separation condition holds for all memories; thus, it is not necessarily able to store *all* the given points.**
Second, MHN does not have an extra feature space in which to store memories. If the data dimension is too small, the well-separation condition may be impossible to achieve.
For KHNs, since the number of learning iterations of Algorithm 1 and $D_\Phi$ are unrestricted, one can select sufficiently large values of both to ensure the well-separation condition holds for all memories. We can also compare KHNs and MHNs via Lemma 2.1: with sufficiently large $D_\Phi$ and $N$, $R_\Phi$ will eventually surpass $R$, yielding a higher lower bound.
Empirically, we conducted additional experiments (in Table 2 and Figure 3 of the attached pdf) showing improvements of U-Hop+ over MHN.
> **Q2:** Definition 2.7 assumes that the patterns are normalized. This assumption is not explicitly stated elsewhere and is not well-justified.
**Response:**
**We would like to highlight that this is a commonly used assumption in the analysis of [1, 2] and in experiments in [1, 2, 3].**
A justification for this setup is its connection to transformer attention. The modern Hopfield update rule/retrieval dynamics are often compared to the attention mechanism, where the input query and memory set correspond to Q and K in attention. Since LayerNorm is commonly used in attention layers, this setup reflects real-world scenarios.
Analyzing non-normalized patterns is left for future work.
Lastly, we have added a highlight block for this assumption in `line 92` of the final version.
> **Q3:** Definition 2.8 is more like a statement rather than a definition. Why should the definition lead to the conclusion that "there is a feature map such that is a memory code"?
**Response:** Thanks for pointing this out. We recognize this potential confusion and have changed the definition to a Lemma, as it is a derived result from Definitions 2.2 and 2.7.
> **Q4:** Is the convergence in Theorem 2.2 pointwise convergence or uniform convergence? For a given what's the error rate?
**Response:**
The main purpose of Theorem 2.2 is to show one optimization problem converges to another. The convergence between two functions here is not our main focus. That being said, the convergence between $\mathcal{L}_0$ and HardMax loss is uniform convergence as $\tau$ goes to 0. We have made it clear in our manuscript by adding the following in line 589.
>> Given $\tau > 0$, the error rate $\epsilon$ of the convergence between $\mathcal{L}_0$ and the HardMax loss satisfies $\epsilon \leq 2 \tau \log(M)$.
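The mechanism behind such a rate is the standard log-sum-exp smoothing bound, $\max(x) \le \tau \log \sum_i e^{x_i/\tau} \le \max(x) + \tau \log M$ (a closely related one-sided bound, not the paper's exact constant). A quick numerical check with generic values, unrelated to the paper's experiments:

```python
import math

def smooth_max(x, tau):
    """Temperature-smoothed maximum: tau * log(sum_i exp(x_i / tau)),
    computed in a numerically stable way."""
    m = max(x)
    return m + tau * math.log(sum(math.exp((xi - m) / tau) for xi in x))

x = [0.3, -1.2, 0.9, 0.7]
for tau in (1.0, 0.1, 0.01):
    gap = smooth_max(x, tau) - max(x)
    # standard bound: 0 <= smooth_max - max <= tau * log(M)
    assert 0.0 <= gap <= tau * math.log(len(x)) + 1e-12
```

As $\tau \to 0$ the gap vanishes uniformly over inputs, which is the sense in which the smoothed objective converges to the HardMax one.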
> **Q5:** The proposed algorithm learns a linear map, what about non-linear maps?
**Response:**
Yes, Algorithm 1 also works for non-linear maps.
We would like to highlight that only linear feature maps have been discovered for KHNs. The explicit form of non-linear maps is not yet known. The desired (retrieval dynamics with) non-linear map should satisfy fixed-point convergence and monotonic energy minimization to be well-defined under the KHN framework. Discovering its explicit form is left for future work.
> **Q6:** In line 597, since you are referring to a known upper bound in a book, it would be better to provide more details for the reader to find the result.
**Response:** Thank you for pointing this out. We have modified the citation to [Thm 1, Moore 1974].
> **Q7:** There are some typos: Line 48: does -> is Line 105: lower bounded by what? Definition 2.5: It seems that there should be an arg in front of minmax Definition 2.7: satisfies -> satisfy
**Response:**
Thank you very much for pointing these out!
For Line 105, we change it to: “lower bounded by the following lemma:”
For other typos, we have conducted another round of proofreading and fixed all typos.
===
* [1] Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models. ICML 2024
* [2] Sparse and Structured Hopfield Networks. ICLR 2024
* [3] Universal Hopfield Networks: A General Framework for Single-Shot Associative Memory Models
* [4] Trivializations for gradient-based optimization on manifolds. NIPS 2019
* [5] Vector Packing in Finite Dimensional Vector Spaces. 1974
* [6] On kissing numbers and spherical codes in high dimensions. 2018
* [7] New lower bounds on kissing numbers and spherical codes in high dimensions. 2023
---
We hope these responses have addressed your concerns. Thank you!
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal.
- In Fig. 3 of the attached pdf, it seems the results for "Sparse Hopfield + U-hop+" are missing.
- > "We recognize this potential confusion and have changed the definition to a Lemma, as it is a derived result from Definitions 2.2 and 2.7."
Could you elaborate on how it is derived from Definitions 2.2 and 2.7?
- I don't think my question Q4 is addressed.
---
Rebuttal 2:
Comment: Thank you very much for the response.
> **Q1:** In Fig. 3 of the attached...
**Response:** It is almost overlapped with the Generalized Sparse Hopfield + U-hop+ line.
> **Q2:** Could you elaborate on how it is derived from Definitions 2.2 and 2.7?
**Response:** We apologize for the confusion. The logic behind this is that Def 2.2 leads to Def 2.7, and then Def 2.7 leads to the maximal capacity as we refer to the solution of the $\max_{ \Phi(V) } | \Phi(V) |$, and $\tilde{\Phi}$ is the solution of $\argmax_{ \Phi } \max_{ \Phi(V) } | \Phi(V) |$.
> **Q3:** I don't think my question Q4 is addressed.
**Response:** With the same $\Xi$ and $\Phi$, $\mathcal{L}_0$ converges uniformly to the HardMax loss with the same error rate of $\epsilon = O(\tau \log M)$. However, since Thm 2.2 considers two optimization problems, it is in fact $\Gamma$-convergence between these two minimization problems (convergence of functionals). By the properties of $\Gamma$-convergence, the minimizers of one minimization problem also converge to those of the other. We will address this in the proof in our manuscript.
____
Thanks again for the response and your insightful review, we hope our responses address your concerns.
---
Rebuttal Comment 2.1:
Comment: Thank you for the response. However, I think there are still some misunderstandings.
Q1: Could you provide any intuition on why the Sparse Hopfield + U-hop+ line is almost overlapped with the Generalized Sparse Hopfield + U-hop+ line?
Q2: In my previous question, I was wondering why Definition/Lemma 2.8 leads to the conclusion that "there is a feature map such that is a memory code". It seems to me that there are some important premises for Memory Code (Definition 2.7). One is the assumption that the patterns are normalized. Another is that the feature map $\Phi\in \mathcal{H}$ is linear.
---
Rebuttal 3:
Comment: Gladly!
> **Q1:** Could you provide any intuition on why the Sparse Hopfield + U-hop+ line is almost overlapped with the Generalized Sparse Hopfield + U-hop+ line?
**Response:** Sparse Hopfield is a special case of Generalized Sparse Hopfield (GSH) with the sparsity parameter $\alpha=2$. Notably, $\alpha=2$ represents the most sparse GSH. Therefore, for datasets with high sparsity, the two may overlap.
> **Q2:** In my previous question, I was wondering why Definition/Lemma 2.8 leads to the conclusion that "there is a feature map such that is a memory code". It seems to me that there are some important premises for Memory Code (Definition 2.7). One is the assumption that the patterns are normalized. Another is that the feature map is linear.
**Response:** Thanks for pointing this out. We notice our expression causes confusion. We have changed the paragraph into the following:
"From this definition, with a fixed $D_\Phi$, for some set of patterns $\mathcal{V}$ that has size $M^*$, there might exist a function $\tilde{\Phi} : \mathbb{R}^d \rightarrow \mathbb{R}^{D_\Phi}$, such that it pushes memories further from each other to the level where $\tilde{\Phi} ( \mathcal{V} )$ is a memory code.
In other words, if one hopes to store all points in $\mathcal{V}$ with KHNs, the objective is to find an appropriate $\tilde{\Phi}$."
**This paragraph is meant to be an intuitive explanation rather than a theoretical result. Thank you again for pointing this out, we have revised our manuscript accordingly.**
---
Rebuttal Comment 3.1:
Title: A Gentle Reminder
Comment: Dear Reviewer,
As the discussion period is coming to an end, we want to check whether our latest response has addressed your concerns.
We have responded to the reviewer's follow-up questions and concerns. If they have been resolved, we respectfully ask that you consider increasing your score to reflect your satisfaction.
Please let us know if you have any further questions or need clarification. Thank you!
Best regards,
Authors | Rebuttal 1:
Rebuttal: ## General Response/Rebuttal Summary
Dear Reviewers,
We thank the reviewers for the insightful questions and reviews.
We have answered all the questions and addressed all the problems in detail in rebuttal and revision.
In response to the reviewers' suggestions, these revisions include additional explanations, refined definitions, paragraphs, and tables to help readers understand this paper. Most importantly, 3 new experimental studies have been added to further clarify U-Hop+'s superiority, including **Multiple Instance Learning (MIL)**, **Memory Retrieval Task** and **Loss Curve w.r.t. Different Memory Size.**
---
### **Revision Details**
**Major Revisions Include:**
* **3 Additional Experiments with Uniformly Positive Results:** [`UbwM`,`U6Ut`, `LSVx`,`vthX`]
* **Multiple Instance Learning (MIL)**: We compare the performance of modern Hopfield and Sparse modern Hopfield with and without U-Hop+. The results show U-Hop+ constantly improves models performance in MIL tasks. [`Table 2 of Attached PDF` ]
* **Memory Retrieval Task**: We conduct memory retrieval experiments with respect to changes in memory size and different noise perturbations on the queries, using MNIST and CIFAR10 as datasets. The results show that with U-Hop+, the retrieval dynamics enjoy lower retrieval error. [`Figure 3 of Attached PDF`]
* **Loss Curve w.r.t. Different Memory Size:** We plot the loss curve of Algorithm 1 on MNIST to demonstrate the sub-linear-time convergence of $\mathcal{L}_0$. [`Figure 4 of Attached PDF`]
* **Clarify Assumption of Normalized Patterns** [`UbwM, LSVx`]
* We specify “patterns are normalized” as a separate definition in `line 92`
* **Refined Definition 2.7:** [`UbwM`,`U6Ut`,`vthX`]
* For clarity, we divide the original Def. 2.7 into two separate definitions: A & B
* A: Well-separation condition (a condition for KHMs to store a given memory pattern)
* B: Memory Code (spherical codes with all points satisfying the well-separation condition)
* **A Paragraph of Comparison with Prior Works (UHop or KHN)** [`U6Ut`]
* Prior KHN lacks theoretical justification for how separation maximization helps capacity
* To fill this gap, this work provides a rigorous theoretical analysis of why and how separation maximization improves KHN memory capacity
* Additionally, we introduce the notion of optimal capacity and prove the asymptotic conditions for achieving it.
* **Change Definition 2.8 to Proposition 2.1** [`UbwM`]
* We modify Definition 2.8 into a Proposition since it is a derived result from the previous definition and theorem.
**Minor Revisions Include:**
* Proofread the manuscript and fixed all identified typos and grammatical errors by reviewers and authors.
* Added explanation for the upper bound used in `line 597`
* Added explanation for the uniform convergence between $\mathcal{L}_0$ and the HardMax loss.
* Changed `line 590` from $1/2\beta M$ to $1/(\beta(M-1))$
We hope these revisions address the reviewers' concerns and improve the overall quality of our paper.
Thank you again for your review!
Best regards,
Authors
Pdf: /pdf/1d622ab4c7dca950498554c38ad38332b8c869f8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AV-GS: Learning Material and Geometry Aware Priors for Novel View Acoustic Synthesis | Accept (poster) | Summary: This paper proposes a novel view acoustic synthesis approach based on a 3D Gaussian Splatting scene representation. The framework consists of a 3D GS model, an acoustic field network, and an audio binauralizer. It first trains a 3D GS model to capture scene geometry, then uses attributes from each Gaussian to initialize a learnable acoustic point representation, averaging the features obtained from nearby Gaussian points for acoustic modeling.
Through experiments on the RWAVS and SoundSpaces datasets, the authors demonstrate that the proposed method achieves state-of-the-art performance. They also perform ablation studies on several components to demonstrate the technical advances.
Strengths: - The paper brings up a clever way to leverage point-based scene representation from 3DGS for acoustic modeling and the results are promising.
- The authors perform thorough experiments and ablations to demonstrate its technical advance. The proposed method outperforms baselines on the quantitative evaluation.
Weaknesses: - The reviewer thinks there are several issues with the writing:
- First, **the method part isn't clearly stated**. It took the reviewer a long time to fully understand the method, which is simple: using pretrained 3DGS to initialize a learnable acoustic point representation. The method figure doesn't quite help with understanding.
- The reviewer strongly feels the paper is **overclaimed**. It claims to learn a "holistic geometry-aware material-aware scene representation" while the experiments only cover acoustic modeling results. The authors need experiments to support this claim; one thought the reviewer has is to visualize the PCA features of the learnable acoustic points to see if they reveal some cues. Otherwise, the reviewer is only convinced that this paper proposes a better acoustic modeling approach that leverages the GS features.
- The major experimental **results for the baselines are directly brought from the AV-NeRF and INRAS papers**, but the authors don't mention this in the paper. Also, the authors need to confirm that they have guaranteed the same training and test split and evaluation code.
- The reviewer has a question about the position-guidance G condition. The authors use unit vectors, which discard distance information. The reviewer is confused as to why the distance is removed. Wouldn't it be an important cue?
- In the paper [5] Real acoustic field, the authors found that an energy decay loss yields a powerful improvement on energy-based metrics. Did the authors consider using this loss?
Technical Quality: 3
Clarity: 2
Questions for Authors: I have mentioned my concerns and questions in the weaknesses part and I hope the authors could address them during the rebuttal.
A suggestion to the authors for paper writing: save the figures as PDF instead of screenshots or png.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness
**W1: Method part isn't clearly stated**
We will re-draw Figure 2 (Overview of our proposed AV-GS) to make it clearer and more straightforward, and will update all figures to PDF.
**W2: PCA feature of learnable acoustic points**
(The PCA plot and the correlation matrix discussed below have been provided in the PDF attached above, in the common "Author Rebuttal")
For RWAVS scene 6, we analyze alpha by plotting its first two principal components and applying k-means (k=3) clustering, to project colors onto G_a. It can be observed that most of the objects are grouped by the same color. Next, we manually match scene objects (e.g., fireplace, walls) with material labels (e.g., 'brickwork', 'solid wall' respectively) from an absorption database [6]. Using these absorption coefficients, we assign absorption vectors (Dim: [1x6]; coefficients for 6 frequencies from [6]) to every point that corresponds to the object (matched with the label). A correlation matrix of these absorption coefficients, across the points (grouped by clusters), shows high self-correlation within clusters and lower cross-correlation across clusters, i.e., the points within the same clusters have similar absorption values, and less correlated absorption across the clusters. While these clusters do not explicitly highlight fine-grained material properties such as Young's modulus, they provide implicit material cues for binauralization.
Additionally, we demonstrate the effect of freezing alpha during our training (initialized using SH from vanilla 3D-GS). As shown in the table below, constraining alpha to SH results in a significant drop in binauralization performance.
| Alpha- trainable | MAG (↓) | ENV (↓) |
|------------------|---------|---------|
| No | 1.584 | 0.147 |
| Yes | 1.417 | 0.140 |
Combined with the PCA/correlation matrix above and this experiment, we hypothesize that the learned alpha is able to pick up implicit material properties on top of SH to aid binauralization. We agree that alpha captures an implicit material representation derived from raw RGB images, rather than explicit properties like Young's modulus, and will clarify this in the revised version.
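The alpha analysis described above (PCA projection, k-means clustering, and a correlation check against database absorption vectors) could be sketched roughly as follows. This is our own illustrative reconstruction with synthetic stand-in data, not the authors' code; the array shapes and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the learned per-point parameter alpha
# (one feature vector per Gaussian point).
alpha = rng.normal(size=(300, 16))          # (num_points, feature_dim)

# PCA via SVD: project alpha onto its first two principal components.
centered = alpha - alpha.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
alpha_2d = centered @ vt[:2].T              # (num_points, 2)

# Minimal k-means with k=3, as in the rebuttal's clustering.
k = 3
centers = alpha_2d[rng.choice(len(alpha_2d), size=k, replace=False)]
for _ in range(20):
    dists = ((alpha_2d[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = dists.argmin(axis=1)
    centers = np.stack([alpha_2d[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])

# Assign each cluster a hypothetical [1x6] absorption vector (six
# frequency bands, as from an absorption database) and inspect the
# correlation across points: points sharing a cluster share a vector,
# so within-cluster correlation is high by construction.
absorption = rng.uniform(size=(k, 6))
point_absorption = absorption[labels]       # (num_points, 6)
corr = np.corrcoef(point_absorption)        # (num_points, num_points)
```

In this toy setup, points assigned identical absorption vectors correlate perfectly within a cluster; in the rebuttal's real analysis, high within-cluster and lower cross-cluster correlation is the observed (not constructed) outcome.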
**W3: confirm if same train-test split and evaluation code as baseline**
Thank you for pointing this out. We adopt the same training and test split and the evaluation code provided by AV-NeRF (available on GitHub), making the comparison fair.
Please note that AV-NeRF provides train and test json files, and we adopt the same dataloader from their code.
**W4: Position-guidance. Isn't distance an important cue?**
Great question! We found using only the view information for position guidance suffices for the model performance. We found that this normalization in Eq. (3) provides numerical stability during training.
With regards to the distance being an important que, please note that the position embedding of the listener location is already a part of the binauralizer, please check the Fig. 7 in A.2
We skip the arrow (showing X_L/X_S into the binauralizer) in the Fig 2, to maintain brevity and not deviate focus of the novel contributions. Instead we tried to highlight this in our Binauralizer Fig. 7 in A.2. We will update this in the revised version.
**W5: Does energy decay loss help?**
Thank you for pointing this out. We adopted the decay loss in addition to our proposed losses, however we did not find any improvement. (please note, for the RWAVS dataset, we compute the l1 loss between the decay curves for the GT and the predicted spectrograms of the binaural audio).
Following RAF [7], we also ablated the weight of the decay loss in the overall loss computation. However, in contrast to RAF's findings, where the decay loss improves temporal-domain metrics but worsens temporal-frequency-domain metrics (STFT error), we find that it worsens both the temporal metric (envelope distance, ENV) and the temporal-frequency-domain metric (spectrogram magnitude, MAG).
| Lambda (decay weight) | MAG (↓) | ENV (↓) |
|-----------------------|---------|---------|
| 0.0 | 1.417 | 0.140 |
| 1.0 | 1.481 | 0.142 |
| 2.0 | 1.510 | 0.142 |
| 3.0 | 1.486 | 0.141 |
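One plausible form of the decay loss described above (an L1 distance between Schroeder-style energy decay curves of the ground-truth and predicted spectrograms) is sketched below. This is our own reconstruction under stated assumptions; the exact formulation used in RAF [7] and in the rebuttal's experiments may differ.

```python
import numpy as np

def decay_curve(spec):
    """Energy decay curve (dB) from a magnitude spectrogram (freq, time):
    backward-cumulated frame energy, Schroeder-integration style."""
    energy = (spec ** 2).sum(axis=0)            # energy per time frame
    remaining = np.cumsum(energy[::-1])[::-1]   # energy still to come
    return 10.0 * np.log10(remaining / remaining[0] + 1e-12)

def decay_loss(spec_pred, spec_gt):
    """L1 loss between the decay curves of prediction and ground truth."""
    return np.abs(decay_curve(spec_pred) - decay_curve(spec_gt)).mean()

rng = np.random.default_rng(3)
spec_gt = np.abs(rng.normal(size=(257, 100)))   # stand-in GT spectrogram
spec_pred = spec_gt + 0.1 * np.abs(rng.normal(size=(257, 100)))
loss = decay_loss(spec_pred, spec_gt)           # non-negative scalar
```

Such a loss would be computed per channel for binaural audio and added to the overall objective with the weight lambda ablated in the table above.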
References:
[1] Gao, Ruohan, and Kristen Grauman. "2.5 d visual sound." CVPR'19
[2] Luo, Andrew, et al. "Learning neural acoustic fields" NeurIPS'22
[3] Su, Kun, Mingfei Chen, and Eli Shlizerman. "INRAS" NeurIPS'22
[4] Liang, Susan, et al. "AV-NeRF." NeurIPS'23
[6] C. Kling. Absorption coefficient database, Jul 2018.
[7] Chen, Ziyang, et al. "Real acoustic fields" CVPR'24
[8] Tang, Zhenyu, et al. "GWA" SIGGRAPH'22
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. The PCA results show that the model learns material-related properties, and the explanation of distance information clarifies my concerns. I’ll raise my score to ‘weak accept’ and hope these changes are reflected in the revised version.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for recommending acceptance. We appreciate this discussion and will incorporate it in our final version. | Summary: This paper proposes a new audio-visual Gaussian Splatting model for novel view acoustic synthesis. The proposed method explicitly models scene geometry and learns a point-based scene representation with an audio-guidance parameter on locally initialized Gaussian points that takes the spatial relation between the listener and sound source into consideration. This work is the first to apply Gaussian Splatting to the novel view acoustic synthesis problem, and experiments on two datasets demonstrate the effectiveness of the proposed method compared to prior work.
Strengths: - The idea makes sense, and I was expecting gaussian splatting to be applied to this problem by somebody sooner or later, and this paper is indeed doing that.
- Generally, the paper makes good attempts to apply Gaussian Splatting to this recently proposed task, a direct extension on how it is used for visual rendering. The method is clearly formulated and defined with informative notations.
- Experiments are conducted on two datasets following prior work AV-NeRF [17], and it shows noticeable gains compared to prior baselines.
- Many ablation studies are performed, including ablations on the size of the vicinity w.r.t. the listener and sound source positions and on the choice of physical parameters, which is extensive.
Weaknesses: - There is a strong claim that the proposed model is learning material-aware priors, jointly modeling 3D geometry and material characteristics, holistic scene geometry and material information, etc. I would then actually expect materials to be explicitly modeled, such as acoustic properties of surfaces in the form of material parameters (e.g., absorption coefficients, Young's modulus, etc.). However, all that is modeled is a single parameter alpha that is claimed to encapsulate material-specific characteristics of the scene. This is somewhat misleading, as there is nothing constraining alpha to be physics-based or correlated with material parameters. It's likely that it's just learning some scene representation that may be more helpful for acoustic rendering.
- Also the so-called acoustic field network is also nothing about audio by itself. By acoustic field network, I was thinking the method is explicitly modeling sound propagation by taking into account the material properties of surfaces and the geometry of the space. The audio binauralizer is basically following prior work AV-NeRF [17], and what changes is what the audio binauralizer conditions on. In AV-NeRF, it’s scene features from NeRF and in this work it’s the context scene features from gaussian splatting.
- There were no qualitative examples given in supplementary materials, so it's really hard to appreciate the improvement that this method leads to.
- There are also many typos or grammar mistakes here and there. For example:
1. L45, 3D Gaussian splatting-based *methods*?
2. Related work, mixed use of past tense and present tense.
3. L125, a brief ??
Technical Quality: 3
Clarity: 2
Questions for Authors: - See questions in the weakness section above.
- Also, for the SoundSpaces synthetic dataset, why does it only contain 6 indoor scenes? There are more scenes in the dataset, and it would be good to explain why only these six are used.
- Is there any way to interpret the learned audio-guidance parameter to show that it’s material related?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes some limitations are discussed in the end, though some might be missing as discussed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness
**W1: Explicit modeling of alpha**
We appreciate the reviewer's insight regarding the explicit modeling of alpha.
*"there is nothing constraining alpha to be physics-based or correlated with material parameters"*
While we acknowledge the importance of deriving alpha from material properties input, we would like to clarify that AV-GS is motivated by the latest trend towards scalability and adaptability for real-world scenes like RWAVS[4] and Real acoustic field[7], where only RGB frames are available – a more practical and least constrained setting. In contrast, other works [5] [8] using absorption coefficients rely on synthetic scenes and pre-existing absorption databases, matching object labels with material descriptions which are available for synthetic scenes but typically unavailable in real world scenarios as considered in our work.
Please consider that currently, anyone with a phone and a binaural microphone can capture all input data that is required to train an AV-GS model (~4 min of recording for a single room scene) - something not possible if explicit material properties are required for model training.
We agree that alpha captures an implicit, rather than explicit, material representation derived from raw RGB images and will clarify this in the revised version.
(Also, please see Q2 below, where we demonstrate that alpha, although implicitly learned, is related to material properties.)
**W2: Explicit modeling of acoustic field network**
Explicit sound propagation modeling is indeed valuable, but our acoustic field network design is intentionally non-trivial. We learn acoustic properties on 3D-GS points, accumulating points within an empirically validated vicinity of the listener and source. We fuse the implicit material parameter (alpha) with learned position guidance, using point view directions relative to the listener/source. This is further facilitated by audio-aware adaptive density control to prioritise point density across texture-less regions, something vanilla 3D-GS lacks but which is important for NVAS. We agree that explicit beam-tracing within audio-guided 3D-GS is an interesting future research direction.
Regarding the binauralizer, as mentioned in L166, it is not part of our contribution: it was introduced in [1] and adopted by AV-NeRF [4]. We further improve it by deriving holistic conditions from the 3D scene, specifically by learning implicit material properties, fusing them with position guidance, and adaptively densifying the point-based representation w.r.t. the audio reconstruction loss.
**W3: Qualitative audio samples**
Because of the rebuttal instructions regarding not sharing links, we have provided the AC with an anonymized link to the audio samples; apologies for the inconvenience. We will also make our code repo and all rendered samples publicly available after the review period.
**W4: Typos**
Apologies! we will address all the typos in our revised version.
## Question
**Q1: only 6 scenes from Soundspaces?**
We chose six indoor scenes from the SoundSpaces dataset to ensure consistency and valid fair comparisons with previous works, specifically NAF (NeurIPS'22), INRAS(NeurIPS'22), and AV-NeRF(NeurIPS'23), which established benchmarks using these six scenes. They represent a diverse range of environments: (1) Two single rooms with rectangular walls, (2) Two with non-rectangular walls, and (3) Two multi-room layouts.
Currently, both RWAVS and SoundSpaces-synthetic are publicly available and hence are used for fair and consistent validation. We can incorporate similar data (binaural audio and RGB pairs) for other scenes into our revised version.
**Q2: Interpretability of audio-guidance parameter**
(The PCA plot and the correlation matrix discussed below have been provided in the PDF attached above, in the common "Author Rebuttal")
For RWAVS scene 6, we analyze alpha by plotting its first two principal components and applying k-means (k=3) clustering, to project colors onto G_a. It can be observed that most of the objects are grouped by the same color (3 clusters - orange, green, blue). Further, we also manually match scene objects (e.g., fireplace, walls) with material labels (e.g., 'brickwork', 'solid wall' respectively) from an absorption database [6]. Using these absorption coefficients, we assign absorption vectors (Dim: [1x6]; coefficients for 6 frequencies from [6]) to every point that corresponds to the scene object matched with the material label from [6]. A correlation matrix of these absorption coefficients, across the points (grouped by clusters), shows high self-correlation within clusters and lower cross-correlation across clusters, i.e., the points within the same clusters have similar absorption values, and less correlated absorption across the clusters. While these clusters do not explicitly highlight fine-grained material properties such as Young's modulus, they provide implicit material cues for binauralization.
Additionally, we demonstrate the effect of freezing alpha during our training (initialized using SH from vanilla 3D-GS). As shown in the table below, constraining alpha to SH results in a significant drop in binauralization performance.
| Alpha- trainable | MAG (↓) | ENV (↓) |
|------------------|---------|---------|
| No | 1.584 | 0.147 |
| Yes | 1.417 | 0.140 |
Combined with the PCA/correlation matrix above, and this experiment, we hypothesize that the learned alpha is able to pick up implicit material properties on top of SH to aid binauralization.
References:
[1] Gao, Ruohan. "2.5 d visual sound." CVPR’19
[4] Liang, Susan, et al. "AV-NeRF." NeurIPS’23
[5] Ratnarajah, Anton "Listen2Scene" IEEE VR’24
[6] C. Kling. Absorption coefficient database, Jul 2018.
[7] Chen, Ziyang, et al. "Real acoustic fields" CVPR’24
[8] Tang, Zhenyu, et al. "GWA" SIGGRAPH’22.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal and additional results, which help address many of my concerns. The new PCA plot is especially helpful, and I would encourage the authors to include it in the main paper to demonstrate the correlation of alpha with real material properties. The authors are also encouraged to revisit the strong claim that "the proposed model is learning material-aware priors", make appropriate adjustments, and provide supporting results where needed.
Regarding the anonymized link to the audio samples shared only with the AC, I kindly request the AC to confirm and double-check that the audio results are meaningful. If that is confirmed, I am happy to raise my rating and am fine with the paper being accepted, considering the interesting idea and task proposed in the paper. The qualitative audio samples are also important to include as part of the paper to help readers appreciate the task and the gains of the proposed method.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the discussion and for increasing the score towards acceptance. We will update the final version with these details and mention the learned implicit material properties. | Summary: This work proposes an efficient and performant method for novel view acoustic synthesis using Gaussian Splatting representations of 3D scenes. The proposed model integrates geometry and material conditioning, and offers a way to bridge the gap between visually useful 3DGS representations and acoustically useful ones, using "audio-guidance" which takes the local environment into consideration. Experiments show this outperforms existing NeRF-based approaches while being more efficient.
Strengths: Overall, I think this work makes a very useful contribution. Bridging Gaussian Splatting representations with audio rendering is likely to be of significant interest to the community, and the quantitative results and efficiency seem strong. The experiments and comparisons are helpful, and give me confidence that these results are quite robust.
Weaknesses: The value of $\tau_g$ seems somewhat arbitrarily defined, and the range of tested vicinity sizes is somewhat narrow given the efficiency of the method; it's hard to extrapolate what the performance would be with more context (i.e., would it keep going down?). Most of my other suggestions are relatively minor, about the writing and presentation:
- Abstract: “at a 3D scene” -> for clarity, should this be “within” a 3D scene?
- Introduction: “visual rendering of 3D scenes without spatial audio (i.e., deaf)” -> this seems like an inaccurate characterization. Deafness is not about rendering, but about sensation. Spatiality is also not clearly related to this.
- Introduction: “realistic binaural audio synthesis is challenging, since the wavelengths of sound waves are much longer” -> this is not the only reason that this is challenging. Even if you use a simplified (e.g. ray-tracing) model, which would be inaccurate at lower frequencies due to wave-like behavior etc., you would still need a good model of the environment for physics-based rendering, or a large number of spatial measurements for a data-driven approach.
- Introduction: “…modeling 3D geometry and material characteristics of the visual scene objects to instigate direction and distance awareness…” -> while geometry has a direct impact on “direction and distance”, material impacts spatial perception indirectly (through absorption, reflection, and diffusion). Could this be rephrased more clearly?
- Related work: “Anton and Dinesh” should maybe be “Ratnarajah and Manocha”, i.e. last names of authors?
- Related work: Under “Geometry and material conditioning”, the final statement differentiates the current work from prior work by pointing out the non-reliance on scene and geometry inputs, but doesn’t clarify why this is useful. E.g. it could be pointed out that, in the real world, such inputs are often not available, so this work allows generalizing to such settings.
- Figure 2: “Differential Rasterizer” -> “Differentiable Rasterizer”?
- Figure 4: Could there be a subfigure showing 15%, since that seems to give the best overall performance? It would be helpful context, since otherwise the reader needs to interpolate between (c) and (d).
- Section 4.4: “intuitionally” -> “intuitively”?
Technical Quality: 3
Clarity: 2
Questions for Authors: - Is $S$ modeled as an omni-directional point source? I assume so because point source is the typical assumption, and an emission direction isn’t given (vs. listener direction d), but it’s worth clarifying.
- I think the notation here could be clearer. For example, $V_C$ and $V_A$ don’t seem used, and it’s not clear to me what their role is? E.g. why is the observation $O_p$ = ($V_C$, $V_A$) instead of ($I$, $a_{bi}$)? If the pose $p$ captures the position and orientation, what do $V_C$ and $V_A$ add on top of this representation? Why is $I$ upper-case and $a$ lower-case here, when they both represent observations? Why is $a_{bi} = \{a_l, a_r\}$ notated as a set, but other sets are notated as tuples? Additionally, shouldn’t $k_l$ be $k_L$, similar for $S$?
- Conceptually, there is a tension between considering the global scene context, as the paper argues for, and dropping points outside the local vicinity of the source and listener (since sound reflections can easily include points from other regions, in principle). Could the authors comment on this? It’s interesting that the optimal vicinity size is not the largest one tested. Is there some intuition for why this might be the case, given that it offers more information? Currently, the reasoning seems to be that this contains unnecessary information, but the useful information seems to be a subset of this?
- The audio rendering task is cast as a binauralization task. I understand that this is practically necessary, i.e. the input is mono and output is stereo binaural. However, wouldn’t many of the problems in this paper still exist in the monaural listener context as well? E.g. compared with a binauralization task purely based on HRTFs or simplified room geometries (e.g. Richard et al., ICLR 2021), where the context $C$ is not as rich. Could this be clarified?
- How was the $\tau_g$ value decided?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I think the authors have fairly represented the limitations. The broader impact statement is a little bit less clear to me, i.e. I’m not sure I understand the surveillance applications. Could this be clarified? I would be interested for the authors to consider the potential impact of the limitations listed. For example, the limitation of rendering larger scenes; what practical negative impacts could this have in deployment scenarios?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness
**W1: Value of T_g**
T_g is inspired by the vanilla 3D-GS paper (which empirically found a threshold of 0.0002); consistent with this, our threshold of 0.0004 was determined through empirical testing.
In 3D-GS optimization (trained for NVS), a point's gradient is compared with the T_g threshold to decide whether the point should be cloned/split. In that case, the gradients correlate with the ability of the points to contribute to the visual geometry of the scene when they are splatted in a particular camera-view direction. In our NVAS case, the gradient of a point correlates with its contribution to the acoustic guidance when generating binaural audio for a given {X_L, X_S} pair. Please note that NVAS is not limited to the camera-view direction, but rather considers the vicinity of the listener and sound source.
Our empirical choice is in line with 3D-GS, and is intuitively higher because we adopt dual-stage training, wherein rough geometry is already provided to the second stage (Structure-from-Motion points are provided to the first stage), and the threshold applies to a much different sample of points (volumetric camera view vs. listener and source vicinity). A higher value helps save compute cost.
Given a reasonable memory budget, one would ideally decide the threshold empirically for every different vicinity size.
**Minor comments** We sincerely appreciate the discussion regarding general acoustics; we will incorporate it in our next version, along with the 15% vicinity plot. We will relate our Intro section to wave-based methods (good for lower frequency bands, where diffusion and scattering arise, i.e., wave properties) and geometric-acoustics-based methods (good for higher frequencies, where sound behaves like rays with ideally specular reflections, i.e., ray properties).
## Question
**Q1: S omni-directional?**
Right, the sound source is omni-directional; we adopt the same dataset as AV-NeRF [4] and will mention this explicitly in our revised version.
To extend AV-GS to directional sources, directional components could be added (perhaps weighting the vicinity sphere toward the emission direction).
**Q2: Notations**
Sincere apologies for the confusion.
V_C and V_A: We tried to use V to represent generic views; V_C - visual and V_A - audio modality.
V_C = \{I | p\}; V_A = \{a_{bi} | p, a_{mono}\}
For the current scope of input used by AV-GS, you are right in saying, V_C = I and V_A = a_bi. However, in a more generalized NVAS setting where one may supplement the RGB with a depth view, in which case V_C = I+D (RGB + Depth).
I and a: As mentioned these are both observations, however, 'I' is a 3D matrix (RGB) hence, uppercase. 'a' is the 1D audio (time series sequence) and hence lower case. We will update this.
Tuple or set: Apologies, all should be notated as tuples, and k_L and k_S.
**Q3: Dropping points outside the vicinity …, optimal vicinity is not the largest one tested. Why?**
Again, great question! Although it might intuitively seem that enlarging the vicinity size would help, please note that with the GS-learned point-based representation we are typically looking at ~400k points for a single room scene. Increasing the vicinity would add a huge number of parameters and burden the field network - it is a trade-off. We did try voxelizing the representation so that we could deal with fewer points (and in turn fewer alphas) despite increasing the vicinity, but it worsened performance - we hypothesize this is because the alpha parameter for the points post-voxelization cannot be computed with a simple average (or even an MLP, like the anchor points in Scaffold-GS [9]). A specialized attention-based selection of points for the vicinity is an interesting future direction to explore, but please note that not all points closer to the source or listener are actually (more) helpful.
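To make the vicinity trade-off concrete, a naive radius-based vicinity selection with average pooling might look like the sketch below. This is our own illustration with random stand-in data; point counts, radii, and feature sizes are assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(-5.0, 5.0, size=(4000, 3))   # Gaussian point positions
alpha = rng.normal(size=(4000, 16))               # per-point acoustic features

def vicinity_context(center, radius):
    """Average the features of all points within `radius` of `center`."""
    mask = np.linalg.norm(points - center, axis=1) < radius
    return alpha[mask].mean(axis=0), int(mask.sum())

listener = np.zeros(3)
ctx_small, n_small = vicinity_context(listener, radius=1.0)
ctx_large, n_large = vicinity_context(listener, radius=4.0)
# A larger vicinity pools over many more points: more context, but a
# heavier burden on whatever network consumes the selected points.
```

The point count grows roughly with the cube of the radius in a dense scene, which illustrates why the optimal vicinity in the ablation need not be the largest one tested.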
**Q4: Wouldn't many of the problems in this paper still exist in the monaural listener context as well?**
You are correct that many of the challenges in this paper are relevant in monaural contexts as well.
The primary contribution of AV-GS lies in capturing a holistic context of the entire scene and generating conditions from within the 3D scene; the binauralizer architecture is adopted from AV-NeRF (which partly adopted it from [1]) and is not claimed as a novel part of this work.
In smaller shoebox rooms (like the ones experimented with in Richard et al., ICLR'21, and the Office scene in RWAVS), AV-GS's improvement is less pronounced due to the limited impact of room geometry.
However, AV-GS shows significant improvements in more complex multi-room scenes (e.g., "house" and "apartment" in RWAVS) with multiple wall occlusions, where AV-NeRF's AV-Mapper approach is less efficient (scores in Table 1 of the paper).
**Q5: Value of T_g**
Please check W1 above.
**Limitations: Broader impact unclear**
Thanks, we will clarify. Once an AV-GS has been trained to accurately model room geometry and material properties, synthesizing highly realistic binaural audios in real time should be possible. This could be used to create misleading audio experiences, such as simulating the presence of people or activities that are not actually occurring, which could be used to deceive or manipulate individuals.
With regards to rendering large scenes, we agree that generalization to multiple rooms (4 or 5, or a multi-storey house) is indeed challenging. Additionally, the problem with adopting an explicit point-based representation is that the number of points increases drastically for larger scenes: although capturing fine-grained details is not necessary (as in the case of NVS), a rough geometry is still a crucial requirement for the points to be able to guide NVAS.
References:
[9] Lu, Tao, et al. "Scaffold-gs" CVPR'24
---
Rebuttal Comment 1.1:
Comment: Thank you, I think these are helpful clarifications and I appreciate your detailed response. I continue to enthusiastically recommend acceptance.
---
Reply to Comment 1.1.1:
Comment: We are grateful for the reviewer's encouraging comments. We will incorporate this discussion in the next version. | Summary: The paper proposes a 3D Gaussian splatting (3DGS) based method for improving the acoustic context modeling for novel view acoustic synthesis (NVAS). The proposed model outperforms multiple existing methods on both simulated and realworld data across different metrics. The method also trains and infers faster than the state-of-the-art AV-Nerf method. The paper also provides model ablations and analyses that help in better understanding the role of different model components.
Strengths: 1. The idea of using 3D Gaussian Splatting for better modeling of fine-grained geometry- and material-aware acoustic context for improving NVAS performance is interesting, novel, and works well in practice.
2. The paper reports strong results, where the proposed model outperforms existing methods on both simulated and real-world data.
3. The provided model ablations and analyses help in better understanding the contributions of different model components and design choices
Weaknesses: 1. Acoustic conditioning with G_a: the paper "averages the context across all points in G_a" (L156-7). Doesn't this lead to loss of fine-grained point-specific information? A learned aggregation strategy (for example, using the attention mechanism) would probably make more sense here?
2. L192-4, "L_v ... non-overlapping":
i) the insight behind doing this is not clear from the text
ii) there are no references to support these steps
iii) there is no ablation for this loss, as well
3. L208, "The audio guidance ... random initialization": won't random initialization during point expansion lead to local 'discontinuity'? Did the authors try warm initialization to preserve local 'continuity' by doing a nearest neighbor search or bilinear interpolation on the existing alphas?
4. Does the model generalize to more challenging scene datasets like Matterport3D [1]?
5. Audio examples: the paper does not seem to provide audio examples and qualitatively compare their quality with that of the baselines.
6. Minor:
i) L274-5, "4 perspective views ... receiver's position": do these 4 views cover the full 360 degrees? Otherwise it is not comparable with the field of view of the AV-GS model.
ii) Para in L285-90 looks like repeated text
References:
[1] Matterport3D: Learning from RGB-D Data in Indoor Environments. Chang et al. 3DV 2017.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the rebuttal comment (more) on the following?
1. how the chosen strategy for aggregating the acoustic context in G_a affects model performance. See weakness 1 for details.
2. the rationale behind using L_m, and how the model performs without the loss. See weakness 2 for details.
3. initialization of alphas during point densification in G_a. See weakness 3 for details.
4. generalization of the model to more challenging datasets like Matterport3D. See weakness 4 for details.
5. my minor comments. See weakness 6 for details.
Also, could the authors anonymously provide a few audio examples from the model and baselines?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper mentions limitations in Discussion but I could not find any discussion on societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Questions (and weaknesses)
**W1, Q1: how the chosen strategy for aggregating the acoustic context in G_a affects model performance.**
L156-7: "We obtain the condition for binauralization by averaging the context across all points in G_a, post dropping points outside the vicinity of the listener .."
In every iteration a random listener location is sampled.
Please note from Eq. (3) that the formed context is already conditioned on the direction (position-guidance), which is already an explicit attention.
The context averaging step happens only for the points within the vicinity (which is a smaller subset of all points that are representing the whole scene, ~400k).
We did try to learn attention-based weighting among the vicinity points; however, we often found that the weight predictor collapses, trapping the optimization in a poor local minimum (i.e., all points are assigned the same weight, 0.5 for a sigmoid-activated weight predictor), because the vicinity changes drastically across iterations depending on the sampled listener location. One might instead assign weights to all points in the scene rather than only those within the vicinity; however, this is very compute-intensive for a single iteration.
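For concreteness, the vicinity-based averaging described above can be sketched as follows (a minimal toy illustration with hypothetical names, not the actual AV-GS code):

```python
# Sketch: drop points outside the listener's vicinity, then average the
# remaining per-point context vectors to condition the binauralizer.
# All names (vicinity_context, radius) are illustrative assumptions.
def vicinity_context(points, contexts, listener, radius):
    kept = [c for p, c in zip(points, contexts)
            if sum((a - b) ** 2 for a, b in zip(p, listener)) <= radius ** 2]
    dim = len(contexts[0])
    # simple mean over the kept context vectors
    return [sum(c[d] for c in kept) / len(kept) for d in range(dim)]
```

Because the listener location is resampled every iteration, the set of kept points (and hence the averaged context) changes each step, which is the source of the weight-predictor instability mentioned above.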
**W2, Q2: the rationale behind using L_v, and how the model performs without the loss.**
Apologies for not being clear. Our intuition behind introducing alpha is to help determine each point's contribution to the context used by the binauralizer for generating binaural audio (for a particular listener location). The regularization term encourages lower alphas to prevent over-reliance on a few points and makes the contribution of points (in forming the context) well-distributed, promoting a diverse and representative context, which is key for generalization to unseen listener locations.
[10] proposed a similar regularization term; however, their purpose is significantly different from ours: they use it for volume minimization to curb 'visual overlap' and enable faster raymarching.
We empirically show below that for AV-GS generally lower values of \lambda_a (contribution of regularization loss, Eq. (6)) are preferred.
| lambda_a | MAG (↓) | ENV (↓) |
|----------|---------|---------|
| 0.0 | 1.43 | 0.140 |
| 0.01 | 1.417 | 0.140 |
| 0.1 | 1.440 | 0.140 |
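For concreteness, a minimal sketch of how such a regularization term might enter the objective (hypothetical names; the mean-alpha penalty form is an assumption for illustration, not the paper's exact Eq. (6)):

```python
# Sketch: the total objective adds a small penalty on the mean alpha,
# discouraging over-reliance on a few high-alpha points. All names are
# illustrative, not the paper's code.
def total_loss(main_loss, alphas, lambda_a=0.01):
    reg = sum(alphas) / len(alphas)  # mean alpha over vicinity points
    return main_loss + lambda_a * reg
```

With lambda_a on the order of 0.01, the penalty stays a small fraction of the main binauralization loss, consistent with the table above preferring lower lambda_a values.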
**W3, Q3: initialization of alphas during point densification in G_a.**
That's a great question. We tried warm initialization using both repeat (similar to vanilla 3D-GS and Scaffold-GS) and nearest neighbor (NN). Compared to random initialization, they perform almost the same. We hypothesize this is firstly because the number of points (within the vicinity) that satisfy the gradient condition is much smaller, and secondly because the densification step is carried out only every 100 iterations, which gives the optimization (which happens every iteration) sufficient room to curb any discontinuity. (Not to forget the outlier removal step, carried out every 3k iterations.)
| Alpha initialization | MAG (↓) | ENV (↓) |
|----------------------|---------|---------|
| random | 1.417 | 0.140 |
| repeat | 1.414 | 0.140 |
| NN | 1.419 | 0.140 |
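The three initialization strategies compared in the table can be sketched as follows (all names hypothetical; a toy illustration rather than the actual densification code):

```python
import random

# Sketch: choose the alpha for a newly spawned point during densification.
# "repeat" copies the split/cloned parent's alpha; "NN" warm-starts from the
# nearest existing point. Names and signature are illustrative assumptions.
def init_alpha(strategy, parent_alpha, existing_points, existing_alphas, new_xyz):
    if strategy == "random":
        return random.random()
    if strategy == "repeat":
        return parent_alpha
    if strategy == "NN":
        d2 = [sum((a - b) ** 2 for a, b in zip(p, new_xyz)) for p in existing_points]
        return existing_alphas[d2.index(min(d2))]
    raise ValueError(f"unknown strategy: {strategy}")
```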
**W4, Q4: generalization of the model to more challenging datasets like Matterport3D.**
We currently validate AV-GS on the real-world RWAVS dataset and the synthetic SoundSpaces dataset, in line with prior work in the field of novel view acoustic synthesis - NAF (NeurIPS'22), INRAS (NeurIPS'22), AV-NeRF (NeurIPS'23). As for challenges, RWAVS (real-world) is more challenging than Matterport3D (synthetic), especially the house and apartment scenes involving multiple real rooms in a single scene. Currently both RWAVS and SoundSpaces-synthetic are publicly available. We can incorporate similar data (binaural audio and RGB pairs) for Matterport3D into our revised version.
**W5: Audio examples**
Because of rebuttal instructions regarding not sharing links, we have provided the AC with an anonymized link to the audio samples. Apologies for the inconvenience. We will also make our code repo and all rendered samples publicly available post the review period.
**W6, Q5: Minor: i) L274-5, do these 4 views cover the full 360 degree? ii) Para in L285-90 - repeated text**
(i) That is right, all 4 perspective views cover the 360 degrees of the listener view. In Table 3, despite the 360-degree view, AV-NeRF falls short in binauralization performance due to the lack of a combined holistic scene condition, unlike our method. (ii) Got it, we will remove the repeated text in L285-90 in the update.
**Limitations:**
Sorry for the confusion, the impact was provided in the "Appendix / supplemental material". We will update it.
Impact: Once an AV-GS has been trained to accurately model room geometry and material properties, synthesizing highly realistic binaural audios in real time should be possible. This could be used to create misleading audio experiences, such as simulating the presence of people or activities that are not actually occurring, which could be used to deceive or manipulate individuals.
References:
[10] Lombardi, Stephen, et al. "Mixture of volumetric primitives for efficient neural rendering." ACM ToG
---
Rebuttal 2:
Title: Response to rebuttal
Comment: Thanks for the responses. Could you answer/comment on the following?
1. Lambda_a: what was the value for it in the paper?
2. Audio examples: thanks for the audio sample, but I think the paper needs a lot more qualitative examples, given that it is an audio spatialization paper. A few suggestions from my end: 1) same mono audio played at different locations in the same scene (2-3 scenes of different sizes and layouts should be evaluated) + qualitative comparison with baselines, 2) repeat 1 for different mono audio samples.
I would urge the authors to add all the results tables and analyses provided in the rebuttal, in the next draft.
---
Rebuttal Comment 2.1:
Comment: 1. Our current implementation uses lambda_a = 0.01.
2. Indeed. We will add all audio samples (multiple scenes and different mono audio sources) on our project page.
We thank the reviewer for their time and recommending our paper for acceptance. We will incorporate all tables and analyses in the final version. | Rebuttal 1:
Rebuttal: This attached PDF contains the PCA plot and the correlation matrix which is referred in rebuttal response for RHwj - Q2 and DaxK - W2 below.
Pdf: /pdf/36fc2f5aba8e767c6818b90eb59bbd91ed2cfeed.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | Accept (poster) | Summary: The paper explores introspective planning to enhance robotic task execution using large language models (LLMs). The authors introduce a method for LLMs to form uncertainty-aware plans without fine-tuning while addressing hallucination and task ambiguity. Their approach integrates introspective planning with conformal prediction.
Strengths: The use of introspective planning with LLMs.
The write-up is easy to follow.
Sufficient comparison and ablation experiments.
Weaknesses: NA
Technical Quality: 3
Clarity: 4
Questions for Authors: NA
Confidence: 1
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors addressed the limitations of their proposed work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback, acknowledging that our study includes sufficient comparison and ablation experiments. We also thank you for the comment that the write-up is easy to follow. We are looking forward to receiving additional comments and feedback from you during the discussion phase. | Summary: This paper introduces a method that uses introspective planning to guide LLMs in forming uncertainty and ambiguity-aware task plans. The proposed method derives and quantifies the inference uncertainty of LLMs to enhance task planning and conformal prediction. Additionally, a new dataset on safe mobile manipulation is created as part of this work.
Strengths: + Leveraging LLMs to improve robot task planning performance is promising.
+ The concept of introspection is interesting.
+ The created dataset on Safe Mobile Manipulation benchmark could benefit the robotics community.
Weaknesses: - The term "introspection" or "introspective" is not scientifically defined. Additionally, does introspective refer to a robotic agent, the LLM, or the proposed approach?
- The novelty of the paper is unclear, especially compared to [31]. Line 63 states "The fundamental aim of introspective planning is to guide LLMs automatically to reason about task uncertainty and assess the feasibility of candidate actions." [31] also reasons about uncertainty and ambiguity through MCQA. Is [31] introspective?
- The paragraph in Line 135 indicates the significant enhancement is from the introspective planning rationale k_i, and Line 48 indicates that introspective planning provides a tighter statistical guarantee. However, no theoretical or mathematical proofs are provided.
- How does the proposed approach enable the new capability of modeling and addressing task safety? The method of handling safety seems identical to addressing ambiguity through MCQA.
- What robotics simulations or physical robots are used to create the safe mobile manipulator dataset?
Technical Quality: 2
Clarity: 3
Questions for Authors: - Refer to the comments in the Weaknesses section.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No negative societal impact of the work is perceived.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback!
> *The term "introspection" is not scientifically defined. Does introspective refer to a robotic agent, the LLM, or the proposed approach?*
We use the term “introspection” as defined in [19], referring to the human ability to assess internal values and knowledge of abilities to guide domain-level reasoning. In our paper, we extend the usage by analogy to non-human agents—and in particular to language-enabled AI agents—and denote by “introspective planning” a decision-making procedure by which these agents “assess their own confidence regarding task compliance and safety for multiple candidate plans”.
The term “introspective” refers to the planning procedure followed by the agent. In our paper, this planning procedure relies on a large language model, but it is not identical to it. In fact, our planning procedure is introspective regardless of what LLM it uses. The LLM itself (e.g., GPT-4) is not intrinsically introspective but is queried in an introspective manner.
> *What is the novelty compared to [31]? [31] also reasons about uncertainty and ambiguity through MCQA. Is [31] introspective?*
In [31], the LLM *does not in fact reason* about its own uncertainty, that is, it does not form any intelligible logical discourse about it. Instead, MCQA is merely reading out the final-layer activations of the language model, which the system designers (and not the LLM itself) then interpret as log-probability estimates (but these do not readily encode well-calibrated probabilities, hence the need for the conformal prediction calibration phase). In contrast with [31], our approach prompts the LLM to explicitly discuss, in natural language, its own uncertainty regarding the task compliance and safety of candidate options, and the generated discussion is then used to inform the MCQA readout. Our experiments show that this procedure drastically improves the planner's performance across multiple metrics.
We note that while our contributions are briefly alluded to at the start of Section 2 (line 63), they are actually outlined in more detail in the Introduction (lines 36–61). We thank the reviewer for their question, and we will emphasize the central importance of our method, which prompts the LLM using retrieval augmentation to generate explicit reasoning in natural language prior to the MCQA query to refine the uncertainty of the resulting judgment.
> *Line 48 indicates introspective planning provides a tighter statistical guarantee. However, no theoretical or mathematical proofs are provided.*
We clarify that our claim regarding our method’s ability to provide a tighter statistical guarantee is empirical, not theoretical (the latter would require us to make theoretical proofs involving the outputs of LLMs, which would be beyond the scope of our paper).
In particular, we provide a comprehensive benchmark comparison between our proposed introspective–conformal method and various baselines, including the previous state-of-the-art in conformal planning [31]. Our results show significant improvement in balancing task success rate and the robot's decision conservativeness.
Table 1 shows our method's performance with an 85% target success rate, achieving an over-asking rate of about 6%, compared to 51% for the baseline [31]. This means that the baseline [31] is inappropriately asking the user to provide clarification in 1 out of every 2 requests that are in fact unambiguous, while our method does so in just 1 out of every 16.
Similar improvements are evident in Fig. 4g. *For any target success rate between 70% and 99%, our method achieves a significantly lower over-ask rate; and, equivalently, for any acceptable over-ask rate, our method can provide a higher statistical bound on success rate.*
The latter result substantiates our empirical claim: if it is allowable for the robot to ask for unnecessary clarification in only 1 in every 2 requests, then the strongest statistical guarantee achievable through prior conformal planning [31] is 85% success rate, whereas with our method it is 99%. We will emphasize this more strongly in the final version of the paper to clearly justify our empirical claim about the tightness of statistical guarantees.
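For readers unfamiliar with the mechanics, the calibration step underlying such guarantees (standard split conformal prediction, as used in KnowNo [31]; this sketch is a generic recipe, not the paper's exact implementation) looks roughly like:

```python
import math

# Sketch: calibrate a nonconformity threshold so the prediction set over
# MCQA options contains the correct option with the target success rate.
# Nonconformity could be, e.g., the negative log-probability of the
# human-labeled correct option on a held-out calibration split.
def calibrate(cal_nonconformity, target_success=0.85):
    n = len(cal_nonconformity)
    rank = math.ceil((n + 1) * target_success)  # finite-sample corrected quantile rank
    return sorted(cal_nonconformity)[min(rank, n) - 1]

def prediction_set(option_nonconformity, qhat):
    # include every option clearing the threshold; a multi-option set
    # triggers a clarification question to the user
    return {opt for opt, r in option_nonconformity.items() if r <= qhat}
```

Tighter scores for the correct option (as produced by introspective reasoning) yield a smaller threshold and hence smaller prediction sets, which is what drives the lower over-ask rate at a fixed target success rate.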
> *How does the proposed approach enable the new capability of modeling and addressing task safety?*
Our approach introduces several key advancements for modeling and addressing task safety:
- We generate safety-relevant examples for the knowledge base (lines 197-203, 526-540).
- Our approach relies on human-provided valid options to guide the LLM in reasoning about compliance and safety, trusting that human label providers can account for safety when relevant. In Figure 1, we show how these human-provided labels incorporate safety considerations, allowing LLM to generate safety-aware reasoning (e.g., deducing that plastic is unsafe for direct heating on the cooktop).
- We explicitly prompt the LLM to consider safety when forming its introspective judgments at runtime. The prompt for generating the safety-aware knowledge base is provided in lines 653-654 and Table 9, and the knowledge retrieval prompt for inference is provided in Table 12.
We present qualitative results in Figures 3 and 10 to further illustrate the differences between [31] and our approach in addressing safety.
> *What robotics simulations or physical robots are used to create the safe mobile manipulator dataset?*
Both our approach and KnowNo focus on high-level planning through human-labeled mobile manipulation scenarios. The creation of the original Mobile Manipulation, TableTop Rearrangement, and our safe mobile manipulation datasets does not rely on simulations or physical robots. However, these datasets can be utilized for physical experiments. Our approach can be readily combined with the framework in [1] that employs physical robots to execute LLM-chosen plans, although doing so is beyond the scope of our paper. | Summary: The paper tackles the uncertainty quantification problem for task planning with LLMs. Specifically, the paper proposes to first construct a knowledge base using LLM, that contains human-in-the-loop correction and LLM summarization / reflection. Then this knowledge base is used during inference to provide relevant examples in a retrieval-augmented generation (RAG) fashion. Finally, this is combined with conformal prediction to either predict the next action step or ask for clarification due to high uncertainty. Evaluations are performed on a set of text-based task planning datasets and demonstrate improved performance compared to various baselines on multiple metrics.
Strengths: - Uncertainty quantification is an important topic in the context of task planning with LLMs, and the proposed method shows improvement on most of the metrics compared to prior works (and I also find the metrics to be reasonably constructed)
- The paper overall is well-written and easy to follow. Figures are intuitive and helpful for understanding the core contributions.
Weaknesses: - Despite the improvement, the significance of the contribution is slightly unclear when compared to prior work “KnowNo” - it seems that the only difference is that there is an additional “chain-of-thought style LLM summarization” step for the knowledge base and the calibration dataset. Although it is intuitive that additional improvement can usually be gained by chain-of-thought, it remains unclear if the gain is marginal when there is better underlying LLM.
Technical Quality: 3
Clarity: 3
Questions for Authors: See "weaknesses" section above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are described in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the feedback, acknowledging that our proposed method shows significant improvement compared to prior works in terms of addressing uncertainty alignment. We also thank you for the question and for providing an opportunity to clarify our contribution and its significance.
> *What’s the difference between introspective planning and Chain of Thought? Why is the contribution significant?*
Our approach does indeed utilize Chain of Thought, but it goes significantly further. We observed that simply applying Chain of Thought does not effectively address hallucinations and can often lead to overconfident responses. For example, in Table 1, the overstepping rate for Prompt Set + CoT is 30.8%, while our approach achieves a much lower rate of 3.8%. Introspective planning, which utilizes retrieval augmentation, effectively prompts language-enabled agents to proactively refine their own confidence regarding task compliance and safety for multiple candidate plans, with a guaranteed probability that the agent will either execute the actions desired by the user or ask an appropriate follow-up question to disambiguate the user's intent (lines 52-55).
One of our core contributions is the construction of the knowledge base. We introduce a new, weakly supervised offline knowledge base construction method that guides the LLM to generate human-aligned introspective reasoning examples as post-hoc rationalizations of human-selected safe and compliant plans. This contribution is discussed in lines 39-42, 56-58 of the paper. As a result, our approach guides the LLM to generate more precise plans, as evidenced by state-of-the-art performance on three datasets across different domains with various metrics.
Additionally, we have created a new Safe Mobile Manipulation benchmark, which enhances previous mobile manipulation datasets by including safety-critical scenarios and introduces new metrics (ESR, NCR, UCR, OAR, OSR, UR) to evaluate a planner's performance in terms of compliance, safety, and degree of conservativeness. Our approach effectively reasons about both compliance and safety, achieving state-of-the-art performance.
> *Is the gain marginal when there is better underlying LLM?*
We conducted experiments using both GPT-3.5 and GPT-4, and the results are presented in Tables 1-7 and Figures 4-7. Our observations indicate that introspective planning significantly improves performance regardless of whether we use a stronger model (GPT-4) or a weaker model (GPT-3.5).
Introspective planning leverages the reasoning capabilities of the language model to achieve superior performance. Therefore, as the underlying LLM becomes more powerful, the potential for introspective planning to enhance performance increases.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the response -- it has addressed my questions, and I'm inclined to keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response! We appreciate your review and feedback! | Summary: This paper presents "introspective planning" as a method to enhance the reliability and safety of robotic task planning using large language models. The proposed method uses introspective reasoning to address uncertainty through a knowledge base consisting of sets of tasks, observations, and candidate plans, along with a rationale for plans to better align with user intent. Additionally, the authors introduce the Safe Mobile Manipulation benchmark with safety-critical scenarios and new evaluation metrics.
Strengths: 1. The ability to generate uncertainty over robot tasks, especially in cases of unsafe operations, is particularly important.
2. The paper is well motivated in aligning LLM-generated tasks with the true user intent, given possibly ambiguous inputs.
3. The knowledge base includes a rationale for each task which expands on the KnowNo framework for generating confidence scores.
4. The new additional safety benchmark adds important safety scenarios to existing datasets to better evaluate planning under uncertainty systems.
5. The method is described well, and the availability of the code gives the opportunity for the broader community to build on this work.
Weaknesses: 1. The core components of the work are well established. In the proposed direct method, the incremental improvements to existing work are primarily the introspective approach where the LLM is used to generate a rationale for the plan. The conformal prediction method additionally uses a knowledge base to produce statistical guarantees about the prediction which is very important for safe operation. However, the results show significant performance gaps between the direct and conformal prediction. Given this, it would be good to provide a robust analysis of this tradeoff in a general setting.
2. The paper focuses primarily on manipulation tasks that involve somewhat ambiguous items in a kitchen setting such as disambiguating between two sodas in the scene or having the knowledge that a plastic object shouldn’t go into an oven. It would be helpful to include other domains to better show the generalizability of the method.
3. The knowledge base is a key component of the conformal prediction. The authors compare different sizes of knowledge bases, but little is given towards how these should be constructed for a given task and domain. More details on the variations that a user needs to generate should be given. It seems the user needs to be well aware of the failure mode of the tasks to generate an appropriate knowledge base.
4. There may be bias in the user-generated knowledge base, especially for multiple users. It would be good to show how these affect the performance of the system. Perhaps evaluating a system with multiple users without sharing the intended goals would show how well the system aligns across multiple users and a single knowledge base.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It wasn’t initially clear to me whether the knowledge base contains the explanation. Are these stored after construction - they are not in the data files in the code repo.
2. The experimental exploration of the knowledge-base size contained in the appendix is appreciated. Is there a reason that the other metrics were not included in this evaluation? Do you have insights into why the performance drops with the largest knowledge base and what was the variation in tasks contained in the knowledge base?
3. What is the result when the user provides an ambiguous answer when the system seeks clarification?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. As the authors point out, one limitation is that the conformal prediction seems to perform significantly worse than direct prediction. Particularly interesting is that the Unsafe Rate is worse for conformal prediction than direct, which might raise the question as to how much the uncertainty measurement is contributing to safer outcomes.
2. The method is evaluated on a limited diverse set of tasks in a kitchen manipulation setting.
3. The authors make a good observation about multi-label prediction and more investigation is needed to understand the limits around truly ambiguous tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and thoughtful feedback!
> *Why are there significant performance gaps between the direct and conformal predictions and what is the trade-off?*
We thank the reviewer for pointing this out. We discussed the trade-off between direct and conformal prediction in lines 219-225. While direct prediction provides more accurate predictions, it cannot guarantee success. Conformal prediction can guarantee success but performs worse than direct prediction. We acknowledged this limitation in lines 318-319. We hypothesized that the performance gap between direct and conformal prediction could be due to the misalignment between the first-token probabilities (conformal prediction) and text answers (direct prediction) [2, 3], but this still requires extensive additional analysis. We agree that understanding and analyzing this performance gap more deeply is crucial and future work should aim to reduce it.
> *The paper focuses primarily on manipulation tasks that involve somewhat ambiguous items in a kitchen setting. It would be helpful to include other domains to better show the generalizability of the method.*
In addition to mobile manipulation and safe mobile manipulation tasks, we also conducted experiments on the Tabletop Rearrangement domain. This task involves moving colored blocks and bowls on a table according to specific instructions. These instructions are designed to include ambiguities in attributes (such as alternative names for objects and colors), numbers (using vague terms for quantities), and spatial relationships (using general terms for directions and orientations). The experimental results for this dataset are provided in Appendix A, and a detailed description of this dataset is in Appendix B.
> *What should the human provide to generate our knowledge base?*
To construct the knowledge base, we query the LLM to generate a set of candidate plans, conditioned on the task, the observation, and hand-crafted few-shot examples. The user then needs to provide all the valid options $\mathcal{G}_i$ (alphabetic labels such as A, B, C, D). We prompt the LLM to produce rationales k_i based on these labels. Specifically, we use in-context learning with few-shot examples to guide the LLM in generating explanations of why certain options are valid according to the ground truth.
We provided a detailed explanation of constructing the knowledge base in lines 74-85 and showed an example in Figure 1.
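At inference, the stored examples are retrieved via retrieval augmentation to condition the LLM's introspective reasoning; a hypothetical sketch of such a retrieval step (the embedding-similarity scheme and all names are illustrative assumptions, not the paper's API):

```python
# Sketch: rank knowledge-base entries by cosine similarity to an assumed
# precomputed embedding of the new instruction, and return the top-k entries
# to prepend as few-shot examples in the prompt.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / ((sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5))

def retrieve(query_vec, knowledge_base, k=3):
    # knowledge_base: list of (embedding, example_text) pairs
    ranked = sorted(knowledge_base, key=lambda e: cosine(query_vec, e[0]), reverse=True)
    return [text for _, text in ranked[:k]]
```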
> *Should users be aware of the ambiguity and safety?*
Yes, users should be aware of ambiguity and safety when providing the labels. This awareness ensures that our approach guides the LLM to generate uncertainty- and safety-aware reasoning, resulting in predictions that are both safe and compliant with the user’s request.
> *There may be bias in the user-generated knowledge base, especially for multiple users. It would be good to show how these affect the performance of the system. Perhaps evaluating a system with multiple users without sharing the intended goals would show how well the system aligns across multiple users and a single knowledge base.*
We agree that reducing bias in the knowledge base is important. With less bias, we can use a smaller knowledge base to achieve high performance. When collecting data for the knowledge base, we ensured that users were aware of ambiguity and safety and had varied intended goals, thus maintaining high knowledge quality. Consequently, during inference, we evaluated the performance with users having different intended goals, and the results show that our system can effectively retrieve relevant and helpful knowledge to guide the LLM in reasoning about compliance and safety.
> *Does the knowledge base contain the explanation? Are these stored after construction?*
Yes, the knowledge base contains the explanations $k_i$, which are stored after construction. These explanations serve as important introspective reasoning examples that guide the LLM to generate more human-aligned reasoning. This procedure is described in lines 76-85 and Algorithm 1. We will emphasize this point in the final version of the paper.
> *Is there a reason that the other metrics were not included in the evaluation of performance vs knowledge base size? Why does the performance drop with the largest knowledge base and what was the variation in tasks contained in the knowledge base?*
We thank the reviewer for acknowledging our studies in Appendix C. The goal of this section is to show how the size of the knowledge base influences performance. Among all the evaluation metrics, Exact Set Rate (ESR), which evaluates the model’s ability to generate precise responses, and Success Rate (SR) are the most representative and clearest for supporting our conclusions. We added a plot with additional metric results (OAR, OSR, NCR) on Mobile Manipulation in the pdf.
We evaluated performance on Mobile Manipulation and Safe Mobile Manipulation. With the maximum knowledge base size, there is a slight performance decrease in Mobile Manipulation, while performance remains almost the same for Safe Mobile Manipulation. We hypothesize that this slight decrease could be due to shortcomings in the retrieval process. Although our approach retrieves examples relevant to the target instruction, it does not always guarantee retrieving the most helpful knowledge. A larger knowledge base includes more diverse scenarios but we might also retrieve less relevant knowledge. However, we observed that these cases are rare and do not significantly impact overall performance.
> *What is the result when the user provides an ambiguous answer when the system seeks clarification?*
We do consider scenarios where the user could have multiple intents. For example, if the user asks for "bring me a soda" when there are both Coke and Sprite, selecting either option is considered a success. This differs from "bring me that soda," where the user has a specific intent regarding which soda.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for providing additional clarity on my questions and comments. Also, thank you for providing the additional plot on the size of the knowledge base. This clarifies my understanding of the system and my confidence in my rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response! We appreciate your review and feedback! | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort that all reviewers and area chairs have dedicated to providing valuable feedback and constructive advice on our manuscript. We are encouraged by the consensus among reviewers on the importance of guiding language agents to reason about their own uncertainty and ensure safety during decision-making. Detailed responses to each review are provided in the sections below. The attached PDF contains a figure that evaluates the influence of the knowledge base size on a few additional metrics (OAR, OSR, and NCR), addressing the comment from Reviewer Gr1N.
Here we list all the references used in this rebuttal; [1, 19, 31] carry the same indices as in the paper.
[1] Ahn, Michael, et al. "Do as I can, not as I say: Grounding language in robotic affordances." arXiv preprint arXiv:2204.01691 (2022).
[19] Leake, David B. Introspective Learning and Reasoning, pages 1638–1640. Springer US, Boston, MA, 2012.
[31] Ren, Allen Z., et al. "Robots that ask for help: Uncertainty alignment for large language model planners." arXiv preprint arXiv:2307.01928 (2023).
[2] Lyu, Chenyang, Minghao Wu, and Alham Fikri Aji. "Beyond probabilities: Unveiling the misalignment in evaluating large language models." arXiv preprint arXiv:2402.13887 (2024).
[3] Wang, Xinpeng, et al. "'My Answer is C': First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models." arXiv preprint arXiv:2402.14499 (2024).
Pdf: /pdf/993c70e5a3b6762a79ed00bcee6b834212530f28.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a new method of using LLM to do task planning. The key innovation is a retrieval-augmented generation (RAG) where the LLM retrieves few-shot introspective reasoning examples from a knowledge base that contains examples with supervised labels. The authors integrate the RAG into LLM to either (1) make it directly predict the best plan, and (2) make it use conformal prediction to predict the best plan. Results have shown that the proposed method, LLM + introspective reasoning + direct prediction outperformed all baselines. And LLM + introspective reasoning + conformal prediction outperformed all baselines with conformal predictions. However, LLM + introspective reasoning + direct prediction outperformed LLM + introspective reasoning + conformal prediction, which is an open question for future work.
Strengths: - The method is sound where the main message is very simple - introspective reasoning based on a dataset with ground truth labels is helpful for LLM planning.
- The writing is very clear.
- This paper also proposes new evaluation metrics, which more comprehensively measure planners' performance.
- The experiment is rich with 3 problem domains with various baseline methods, including the non-conformal-prediction-based and the conformal-prediction-based.
- The result has shown the benefit of incorporating introspective reasoning.
Weaknesses: Besides the limitation discussed at the end of the paper, there are two other potential limitations:
- The proposed method relies on a knowledge base with human supervised labels, which may require significant human efforts to construct. It might be useful to discuss the cost of such knowledge base.
- The usage of both introspective planning and conformal prediction might increase the computation load for inference. It might be useful to discuss the increased computation.
Technical Quality: 3
Clarity: 4
Questions for Authors: - In Eq. 2, why choose (N+1)(1-ε)/N?
- Line 135-141 highlights the difference of the proposed work vs [31]. Just to make sure I understand it correctly, the key difference is incorporating `k` in Eq.3?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are discussed at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback, acknowledging that our method is sound, the writing is clear, and the proposed metrics comprehensively measure the planner's performance. We also thank you for recognizing the valuable experimental results we have presented.
> *The proposed method relies on a knowledge base with human supervised labels, which may require significant human efforts to construct. It might be useful to discuss the cost of such knowledge base.*
In our approach, humans only need to provide labels (instead of explanations or reasoning), and we leverage the LLM to generate the reasoning based on these labels, which is then stored as knowledge. This significantly reduces the human effort required. Labeling the knowledge base data with $400$ examples takes approximately $3$ hours.
> *The usage of both introspective planning and conformal prediction might increase the computation load for inference. It might be useful to discuss the increased computation.*
Using introspective planning does indeed increase the computation load for inference. We discussed the cost and efficiency in Appendix D. For our experiments on Mobile Manipulation with a calibration set size of 400, a test set size of 400, and a knowledge base size of 400, KnowNo requires 35k completion tokens and 376k prompt tokens in total. Our approach requires three times the completion tokens and twice the prompt tokens. Consequently, the total API cost for KnowNo is \$4.8, while ours is approximately \$9.8. Despite the increased cost, our method significantly improves the Exact Set Rate, with KnowNo achieving only 37% compared to our method's 94.5%.
> *In Eq. 2, why choose (N+1)(1-ε)/N?*
To ensure the prediction set meets the desired coverage level with the specified confidence $1 - \epsilon$, we use the quantile threshold defined as follows:
$\hat{q} = \text{Quantile}(s_1, ..., s_N ; \frac{\lceil (N+1)(1-\epsilon) \rceil}{N})$
This means $\hat{q}$ is the $\lceil (N+1)(1-\epsilon) \rceil$-th smallest value among the nonconformity scores $s_1, \ldots, s_N$. When we compute $\hat{q}$, we are effectively selecting this specific quantile to ensure the desired coverage. For a new task $x_{\text{test}}$, we include all labels $y$ in the prediction set $\hat{\mathcal{G}}_{\text{test}}$ such that:
$\hat{f}(y \mid x_{\text{test}}, \mathcal{C}_{\text{test}}, k) \geq 1 - \hat{q}$
This criterion ensures that the prediction set $\hat{\mathcal{G}}_{\text{test}}$ includes all labels $y$ where the confidence level $\hat{f}$ meets or exceeds $1 - \hat{q}$. With this construction, we achieve the following coverage guarantee:
$\mathbb{P}(z_{\text{test}} \in \hat{\mathcal{G}}_{\text{test}}) \geq 1 - \epsilon$
This guarantees that the true intent $z_{\text{test}}$ is included in the prediction set $\hat{\mathcal{G}}_{\text{test}}$ with a probability of at least $1 - \epsilon$.
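For concreteness, the quantile and prediction-set construction above can be sketched in a few lines of NumPy. This is a minimal illustration with made-up calibration scores and labels, not the paper's code; the small-calibration-set guard is a standard convention we add for completeness.

```python
import numpy as np

def conformal_quantile(scores, eps):
    """q-hat: the ceil((N+1)(1-eps))-th smallest nonconformity score."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - eps)))   # rank of the score to select
    if k > n:
        return 1.0  # calibration set too small for this eps; be maximally conservative
    return float(np.sort(scores)[k - 1])    # k-th smallest (1-indexed)

def prediction_set(label_confidences, q_hat):
    """Include every label y whose confidence f(y | x) meets the 1 - q_hat threshold."""
    return {y for y, p in label_confidences.items() if p >= 1 - q_hat}

# toy calibration scores s_i = 1 - f(true label | x_i), with N = 10
cal_scores = [0.05, 0.08, 0.10, 0.12, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40]
q_hat = conformal_quantile(cal_scores, eps=0.1)  # rank ceil(11 * 0.9) = 10 -> 0.40
pred = prediction_set({"A": 0.70, "B": 0.55, "C": 0.05}, q_hat)  # {"A"}
```

With these toy numbers the threshold is $1 - \hat{q} = 0.6$, so only label A enters the prediction set; the marginal coverage guarantee then holds over the randomness of calibration and test draws.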
> *Line 135-141 highlights the difference of the proposed work vs [31]. Just to make sure I understand it correctly, the key difference is incorporating k in Eq.3?*
Yes, the key difference compared to [31] is the incorporation of the introspective planning rationale $k_i$ in our framework. This rationale is generated through the process described in lines 74-85, and it significantly improves planner performance in both compliance and safety. It also refines LLM uncertainty and provides a tighter bound of guarantee.
---
Rebuttal Comment 1.1:
Title: keeping my score
Comment: Thank you for the detailed explanation. After reading other reviews, it seems that one main weakness is whether integrating introspective reasoning is sufficiently novel. As I am not familiar with the literature in LLM for planning, I will defer to other reviewers about evaluating the contribution. For now, I am inclined to keep the score.
---
Reply to Comment 1.1.1:
Comment: We really appreciate your response and efforts in the review process! We have included additional notes regarding the novelty of introspective planning in the replies of other reviewers, and I hope they can be helpful as well. | null | null | null | null | null | null |
iVideoGPT: Interactive VideoGPTs are Scalable World Models | Accept (poster) | Summary: This work studies the setting of planning and prediction from video world models. It proposes to use a GPT-style transformer world model that incorporates action and reward information in it's context and prediction pipeline. The model is further equipped with a novel tokenization technique based on VQGAN. Finally, the model is evaluated on three downstream applications, namely video prediction, planning and model-based RL. Ablations on the models and tokenizers scalability are conducted.
Strengths: Problem Statement
* The problem of studying dyna-style video algorithms with foundational models is interesting and warrants investigation.
Clarity
* The paper is very well written and the language is clear
* The paper follows a very concrete thread and is easy to follow
* All plots and figures are well-formatted and easy to understand
Related Work
* The treatment of related work is done very nicely. The paper cites a total of 120 other works, which is rarely seen and commendable.
* The work highlights older as well as recent work and positions itself well within the literature
Experiments
* The experiments contain a large number of baselines and comparisons that are useful to understand the capabilities of the model for downstream return maximization.
Weaknesses: Clarity
* In some cases, the writing makes very strong statements that are very broad and hard to be supported by evidence. E.g.
* L97 "This model can acquire broad world knowledge". This statement is not supported and it is unclear to me what "broad world knowledge" means. I believe concise language about the capabilities of our algorithms is important.
Motivation
* One issue with the paper is that some of the motivation is not quite clear to me. The paper argues that the provided world model has two benefits: It is interactive and scalable.
* First, it is not clear to me what it means for this model to be interactive. If the condition for a model is to be action-conditioned, then the dreamer world-model should also be interactive. In fact, even much older world-models would be interactive [1, 2].
* Second, it is not clear to me that other models would not be scalable and the paper provides no evidence that they are not scalable. The text states in line 91 that dreamer lacks scalability; however, no citation or evidence for this is provided. For instance, dreamer uses state-space models (SSMs), and recent advances have built scalable SSMs [1].
Experiments and Claims
* For the tokenizer, the text claims that it "decouples dynamics information from context conditions" (L161). This seems like a strong statement that warrants supporting evidence. I believe qualitatively looking at a small number of image sequences is insufficient to support such a strong claim about composition.
* Overall, the experimental results seem a little weak.
* The model is outperformed on most tasks in the video-prediction setting. It is unclear why one of the weakest baselines was used for the action-conditioning comparison in section 4.1.
* In the visual planning experiment, the performance exceeds baselines in only 2/7 tasks.
* In the model-based RL experiments, the performance is mostly comparable to dreamer. There are cases where the paper argues that their suggested algorithm outperforms dreamer, but given that the results are only reported over $5$ random trials, it seems that the results are within variance, making this claim too vague. (It’s a little hard to tell because shaded regions are very transparent)
* The benefits of scalability for the downstream control tasks have not been demonstrated.
* The work argues that high perceptual metrics don't necessarily correlate with good control performance (App B.2) but the main benefit of scale seems to be improved performance on those metrics.
* While the work shows in section 4.4 that scaling leads to lower validation loss, whether or not this is correlated with downstream performance is not demonstrated. I believe this point is important because we have to ask what is the marginal benefit of increasing computational complexity to train larger and larger models, if the application to downstream tasks does not benefit much from scale. Showing that larger networks and lower error lead to improved downstream performance is an experiment that might validate the need for scale.
* Similarly, the need for specialized tokenization has not been argued for downstream applications. It seems a little detached from the goal. The main goal of the paper is to argue for scalability and that this tokenization technique is helpful. However, for the downstream applications, neither the planning nor RL sections demonstrate the importance of scaling or tokenizing. One experiment I can think of is to demonstrate that 4x4 tokenization is insufficient due to reconstruction quality. The relationship between scaling and the novel tokenization is not analyzed either.
Overall, I think the motivation of the work is not quite clear enough, the contributed algorithm provides minor improvements in the control settings and the claims for interactivity and scalability could be strengthened.
* The text claims that the tokenization that is presented is more efficient but there is no experiment validating that this is true. A complex encoder structure is used and it is unclear whether this is actually more efficient. An experiment to validate this could measure total wallclock times on the same hardware.
[1] Mamba: Linear-Time Sequence Modeling with Selective State Spaces. Albert Gu, Tri Dao. arXiv preprint arXiv:2312.00752.
[2] Action-Conditional Video Prediction using Deep Networks in Atari Games. Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, Satinder Singh. Advances in Neural Information Processing Systems 28 (NIPS 2015).
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1: What are the training times of your algorithm training and how do they compare to the runtimes of Dreamer?
Q2: Can you elaborate on the term interactive and why other models are not interactive?
Q3: What are the key differences between the proposed model and the MaskVIT model and to which of the difference do you attributed the improved action-conditioned prediction performance in section 4.1?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 4
Limitations: The paper contains an explicit section outlining limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 6ajb for providing a detailed review and insightful questions. We have **responded to common questions in the global response and individual questions below**.
### 1. Motivation of iVideoGPT: Best of both interactivity and scalability
In $\underline{\text{Q1 of global response}}$, we have clarified that iVideoGPT is motivated by two lines of research and aims to achieve both interactivity and scalability.
To specifically answer Reviewer 6ajb's question:
- Interactivity: We define the term "interactive" as meaning that a model can predict future steps incrementally, conditioned on intermediate actions (as defined in Sec 2 of the paper). While **Dreamer and many classic models do have interactivity**, we remark that **many advanced video generation models do not**. These include the well-known Sora and action-conditioned autoregressive models like VideoGPT, as shown in Fig 2b of the paper.
- Scalability:
- Dreamer uses an RNN architecture which has weaker scalability (**defined and discussed with evidence in global response**).
- Why is scalability important? Pre-training foundation models with large-scale data is a well-established paradigm in many other fields of deep learning (e.g., language and the closely related field of video generation), with impressive success, but it is **under-explored and not yet commonly applied in model-based control**.
- While the reviewer mentioned Mamba, we argue that Mamba's macro architecture is highly akin to Transformers, with interleaved temporal modules (SSM) and temporal-independent MLPs, which is far from Dreamer's design. To our knowledge, **Mamba has not been used in world models and no such Dreamer variants exist in the literature**.
### 2. Tokenization
***Context-dynamics decomposition***
To support the decomposition property of the tokenization, we visualized the decoder's reconstruction in $\underline{\text{Fig 1 of attached PDF}}$ by removing cross-attention for future frames. We observed that, without cross-attention, the decoder still reconstructs a trajectory **moving in the same way** as the original, but the **context is almost entirely missing**.
***Computational efficiency***
We measure time and memory usage on the same hardware, which is reported in $\underline{\text{Q2 of global response}}$. Despite the complexity of our encoder-decoder structure, it only takes up a little more resources than a vanilla 16x16 per-frame tokenizer and saves a lot during training and generation of the transformer.
***Relationship with scaling***
Efficient tokenization greatly reduces computational consumption for training and generation (rollouts), allowing us to **scale up the model size at lower cost**.
### 3. Experiments
Our experiments aim to demonstrate that **one architecture and pre-trained model can be adapted into various downstream tasks to achieve competitive performance**. Although we do not significantly outperform state-of-the-art methods on all tasks, we believe it is a valuable step.
***Video prediction baselines***
We only compare with MaskViT on action-conditioned BAIR and high-resolution RoboNet settings, since all other models do not report these settings and many strong baselines such as FitVid or MAGVIT do not release official training codes or instructions.
***Visual planning performance***
We find that mixed results across models in the VP2 benchmark can be primarily attributed to its imperfect built-in reward design. Please refer to Q4 in global response for details.
***Significance of MBRL experiments***
To better show the statistical significance of our superior performance, we follow the protocol from [1] and report aggregated performance using Inter-Quantile Mean across all 30 runs in $\underline{\text{Fig 5 of attached PDF}}$. It shows that **our method significantly outperforms DreamerV3**, a well-tuned strong baseline for this setting.
[1] Deep reinforcement learning at the edge of the statistical precipice
***Control benefits of scalability***
There are two kinds of scaling: **model scaling and data scaling**. In our experiments, we currently do not find that further model scaling can significantly improve performance on Metaworld tasks (iVideoGPT-436M performs on par with 138M), likely because Metaworld is a visually simple simulation environment. However, as shown in Fig. 6 of the paper, **data scaling does provide benefits**---pretraining iVideoGPT enhances MBRL performance. In contrast, Dreamer's lack of scalability limits its potential to benefit from pre-training (see Figs 2 \& 5 in attached PDF).
### 4. Others
***MBRL training time***
Our method in PyTorch takes ~4 hours on 4090 to train for 100k environment steps. Official DreamerV3 in JAX takes ~1 hour. When fairly compared under the same framework, a DreamerV3 implementation in PyTorch takes ~3.5 hours for 100k steps, comparable to ours.
***Difference with MaskViT***
iVideoGPT and MaskViT are **fundamentally different types of generative models**. MaskViT is a masked model that generates all video frames simultaneously through masked reconstruction, typically trained for fixed-length videos. In contrast, iVideoGPT is an autoregressive model, allowing flexible video generation of various lengths. Additionally, as discussed in $\underline{\text{Sec 4.1 of paper}}$, MaskViT uses per-frame tokenization and suffers from temporal inconsistencies, while iVideoGPT employs a novel **tokenization conditioned on consistent contextual information**, contributing to its improved performance.
***Clarification on claims***
We apologize for the strong statements and will revise them for accuracy:
- **"broad world knowledge"** => "common knowledge of motions and interactions in various scenes through pre-training on diverse human and robotic manipulation videos."
- **"decouples dynamics information from context conditions"** => "designed to encourage the decoupling of dynamics information from context conditions."
---
Rebuttal Comment 1.1:
Comment: Dear authors, thank you for the thorough response. My initial questions have been answered satisfactory. I also appreciate all the efforts you put into additional experiments.
I have one follow-up question since I'm not too familiar with vision tokenizers for video prediction.
Q5: In the paper, the text states "Instead of [...] using a 3D tokenizer that compresses videos spatiotemporally at the expense of interactivity, we propose to tokenize videos with a novel conditional VQGAN"
Could you elaborate on why we cannot simply use the VQVAE encoder from the original VideoGPT and why that loses interactivity? What exactly distinguishes this tokenizer from VQVAE?
---
Reply to Comment 1.1.1:
Title: Response to Post-rebuttal Feedback by Reviewer 6ajb
Comment: Dear Reviewer 6ajb,
Thank you again for your time and effort in reviewing our paper. We appreciate your careful review of our rebuttal materials and your recognition of our efforts in addressing your initial concerns.
Regarding your follow-up question: What is the difference between the tokenizer in VideoGPT and ours, and how do they impact interactivity?
VideoGPT uses a VQVAE for video that relies on **a series of 3D convolutions to downsample across space and time**. For example, it downsamples original pixels from $16 \times 64 \times 64$ to discrete tokens of $8 \times 32 \times 32$ or $4 \times 16 \times 16$ (depending on the downsampling ratio). The key issue is that this non-causal downsampling over the temporal dimension results in each token containing information from a window of frames. As a result, **the entire video can only be reconstructed after VideoGPT generates all tokens**. As shown in Fig. 2b of our paper, **VideoGPT only allows the input of future action sequences at the beginning of prediction, preventing an agent from interactively determining its actions based on predicted observations**. In contrast, our tokenizer discretizes video frames separately, using a conditional mechanism to handle temporal redundancy, **enabling frame-by-frame video generation and allowing for intermediate action intervention**.
Moreover, our tokenizer’s novel design, with its cross-attention mechanism, is more efficient in handling temporal redundancy, converting videos into significantly fewer tokens ($L=511$ with $N=256, n=16, T=16, T_0=1$ as stated in Line 124). In contrast, VideoGPT finds that using a downsampling ratio larger than that of the $8 \times 32 \times 32$ token size results in worse performance.
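As a sanity check, the token-count arithmetic stated above works out as follows. The assumption that each of the $T - T_0$ future frames contributes one extra slot token (so that the stated total $L = 511$ comes out) is ours, inferred from the numbers, not taken from the paper.

```python
# Sequence-length arithmetic for the tokenizer (values as stated in Line 124).
# Assumption (ours): each future frame adds one slot token on top of its n tokens.
N, n, T, T0 = 256, 16, 16, 1        # context tokens/frame, future tokens/frame, frames, context frames
context_tokens = T0 * N              # context frames keep the full N tokens each
future_tokens = (T - T0) * (n + 1)   # future frames: n dynamics tokens + 1 slot token
L = context_tokens + future_tokens   # 256 + 15 * 17 = 511
naive = T * N                        # per-frame tokenization at N tokens/frame: 4096
```

Under this reading, the conditional tokenizer yields roughly an 8x shorter sequence than tokenizing every frame at full resolution.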
We hope this response addresses your remaining concerns, and we remain open to further discussion. If our response is satisfactory, we kindly ask you to consider re-evaluating your rating of our work based on the clarified understanding.
Best regards,
Authors
---
Rebuttal 2:
Title: Required Action: Please Respond to the Author Rebuttal
Comment: Dear Reviewer 6ajb,
As the Area Chair for NeurIPS 2024, I am writing to kindly request your attention to the authors' rebuttal for the paper you reviewed.
The authors have provided additional information and clarifications in response to the concerns raised in your initial review. Your insights and expertise are invaluable to our decision-making process, and we would greatly appreciate your assessment of whether the authors' rebuttal adequately addresses your questions or concerns.
Please review the rebuttal and provide feedback. Your continued engagement ensures a fair and thorough review process.
Thank you for your time and dedication to NeurIPS 2024.
Best regards,
Area Chair, NeurIPS 2024 | Summary: This paper introduces Interactive VideoGPT (iVideoGPT), which builds a world model based on the VideoGPT architecture. iVideoGPT proposes compressive tokenization, and the model is trained using millions of human or robot videos (i.e., the Open X-Embodiment dataset). The effectiveness of iVideoGPT is demonstrated in video prediction, visual planning, and visual model-based reinforcement learning.
Strengths: - The empirical validation is extensive including video prediction, visual planning and visual model-based reinforcement learning.
- The scalability of iVideoGPT is impressive.
Weaknesses: - Technical novelty is somewhat limited.
Technical Quality: 4
Clarity: 4
Questions for Authors: - What is main difference between iVideoGPT and IRIS [1] for visual model-based reinforcement learning of visual planning?
[1] Micheli, V., Alonso, E., & Fleuret, F. (2022). Transformers are sample-efficient world models. arXiv preprint arXiv:2209.00588.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: See Weakness & questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer yh7R for providing a thorough review, valuable questions, and a positive evaluation of our paper.
### Q1: Technical novelty
Discrete tokenization and autoregressive transformers are prevalent in contemporary deep learning due to their simplicity and generality. iVideoGPT generally shares this architecture with IRIS but possesses distinguishing features (discussed below in Q2).
Despite adopting a widely used architecture, we believe the most important contribution of our paper lies in **advancing a paradigm of pre-training a scalable world model architecture on large-scale data and adapting it into various downstream tasks**. This paradigm is well-established in many other fields of deep learning (e.g., languages and the closely related field of video generation) but is not yet commonly applied in MBRL (see $\underline{\text{Q1 of global response}}$ for extended discussion).
### Q2: Difference between iVideoGPT and IRIS
We summarize four key features of iVideoGPT that are different from IRIS:
1. Pre-training and fine-tuning paradigm enhancing sample efficiency;
2. Novel and efficient tokenization enabling **faster rollouts** (**~24x**, see Q2 in global response);
3. Flexible action-conditioning design;
4. Off-policy MBRL implementation.
All of these features are highly relevant to visual planning and visual MBRL. Please refer to $\underline{\text{Q3 of global response}}$ for a more detailed discussion.
---
Rebuttal 2:
Title: Required Action: Please Respond to the Author Rebuttal
Comment: Dear Reviewer yh7R,
As the Area Chair for NeurIPS 2024, I am writing to kindly request your attention to the authors' rebuttal for the paper you reviewed.
The authors have provided additional information and clarifications in response to the concerns raised in your initial review. Your insights and expertise are invaluable to our decision-making process, and we would greatly appreciate your assessment of whether the authors' rebuttal adequately addresses your questions or concerns.
Please review the rebuttal and provide feedback. Your continued engagement ensures a fair and thorough review process.
Thank you for your time and dedication to NeurIPS 2024.
Best regards,
Area Chair, NeurIPS 2024 | Summary: This paper introduces a new architecture for an action-conditioned video (and reward) prediction model.
First, a VQGAN converts context frames individually into tokens.
Second, a conditional VQGAN converts future frames individually into tokens, conditioned on the context frames (using intermediate representations from the context encoder).
The idea is that the future encoder can focus on encoding changes in the scene, since the remaining information was already extracted from the context, allowing for a lower number of tokens.
Then, an autoregressive transformer is trained to predict the next (future) token and is optionally conditioned on actions and can optionally predict the rewards in a reinforcement learning setting.
The model is pre-trained on action-free robotic manipulation videos and then fine-tuned and evaluated on video prediction, visual planning, and model-based reinforcement learning tasks.
Strengths: - S1: Encoding the dynamics information conditioned on context information is an intriguing idea for world models.
- S2: The paper is clearly written and the visualizations are informative and appealing.
Weaknesses: - W1: The paper is missing quantitative comparisons of the computational efficiency. For instance, it would be interesting to see the differences in training/inference time when using the different tokenizers (Figure 8(c)).
Furthermore, the authors state at the end of Section 2 that recurrent world models like Dreamer are not scalable, but I'm wondering how significant the difference is, considering the autoregressive next token prediction and use of multiple tokens per frame.
- W2: The authors argue that iVideoGPT eliminates the need for latent imagination in model-based RL (Section 4.3 and Figure 5).
However, there are existing world models (e.g. IRIS [1]) that learn behaviors using reconstructed frames.
Moreover, Dreamer could also learn behaviors this way by using the reconstructions from the decoder (but I understand that this would be a different method).
In short, I don't think that this is a feature that is novel or specific to iVideoGPT.
- W3: The proposed model has a lot in common with IRIS [1], which learns a discrete autoencoder (VQVAE) and uses an autoregressive transformer for next token prediction.
IRIS is briefly mentioned in the related work section, but I think the authors could emphasize the differences in more detail.
Some typos:
- 79: "aims"
- 80: "maximize"?
- 85: "history of $T_0$ video frames"?
- 86: "needs to"
- 107: $D_c(z_t)$ should be $D_c(z_t^{(1:N)})$?
- Also in Eq. (1) it should be $D_p(z_t^{(1:n)}|o_{1:T_0})$?
- 687: Table 2 -> Table 3
I am willing to increase my scores after the listed weaknesses have been addressed.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Q1: Is the world model also conditioned on the actions from the context? In line 87 the condition only includes $a_{T_0:t}$, but in the sequence of tokens $x$, the context also includes slot tokens. This can also not be recognized in Figure 3(b).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors addressed all limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer zSnp for the thorough review and valuable questions. We also appreciate the typos you pointed out, which we will correct in a future revision.
### W1: Computational efficiency
We apologize for omitting a quantitative efficiency analysis. We have now reported training/inference time and memory usage with various tokenizers in $\underline{\text{Q2 of global response}}$. Our proposed compressive tokenization provides **significant memory savings during training and faster rollouts during generation**.
#### Discussion on Dreamer
While we have enhanced the scalability of Transformer-based world models through compact tokenization and resource savings, Dreamer, with its RNN architecture, may still be more computationally efficient. However, as demonstrated in $\underline{\text{Fig 2 of attached PDF}}$, when pre-training Dreamer XL (200M parameters, comparable to iVideoGPT) on the same dataset as ours, we find that it has **insufficient capacity to support large-scale pre-training**, which is crucial for the success of modern foundation models. The results in $\underline{\text{Fig 5 of attached PDF}}$ further show that Dreamer is **unable to benefit from such ineffective pre-training**.
### W2: Eliminating latent imagination
We do not claim that eliminating latent imagination is a unique feature of iVideoGPT. However, since latent imagination is currently the dominant practice in MBRL (as seen in Dreamer, MuZero, etc.), we believe it is an advantage for iVideoGPT to simply serve as a plug-in replacement of the environment. As discussed in Sec 4.3, this can simplify the design space, reduce implementation complexity, and enhance the practicality of MBRL.
The key to this advantage is a powerful world model. Previous action-conditioned video prediction models like FitVid, which lack sufficient capacity and pre-trained knowledge, can also function similarly to iVideoGPT but **produce blurrier predictions** ($\underline{\text{Fig 4 of attached PDF}}$), thus **hindering MBRL performed directly on top of these inaccurate frames** ($\underline{\text{Fig 5 of attached PDF}}$). These results may explain why Dreamer employs latent imagination in MBRL. While IRIS shares a similar architecture with iVideoGPT, it lacks efficient tokenization and large-scale pre-training (as discussed below).
### W3: Difference with IRIS
We summarize four key differences between our approach and IRIS. Please refer to $\underline{\text{Q3 of global response}}$.
### Q1: Conditioning on the context actions
In our implementation, iVideoGPT is not conditioned on context actions. However, it can easily be extended to support this by adding the embedding of context actions to the slot tokens between context frames. We hypothesize that this would not significantly impact performance, as context actions can likely be inferred implicitly from context frames.
---
Rebuttal 2:
Title: Required Action: Please Respond to the Author Rebuttal
Comment: Dear Reviewer zSnp,
As the Area Chair for NeurIPS 2024, I am writing to kindly request your attention to the authors' rebuttal for the paper you reviewed.
The authors have provided additional information and clarifications in response to the concerns raised in your initial review. Your insights and expertise are invaluable to our decision-making process, and we would greatly appreciate your assessment of whether the authors' rebuttal adequately addresses your questions or concerns.
Please review the rebuttal and provide feedback. Your continued engagement ensures a fair and thorough review process.
Thank you for your time and dedication to NeurIPS 2024.
Best regards,
Area Chair, NeurIPS 2024
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed response. I have updated my score accordingly.
---
Reply to Comment 2.1.1:
Title: Appreciation for Your Support
Comment: Dear Reviewer zSnp,
We sincerely appreciate your dedicated re-evaluation of our paper and the subsequent positive rating. Your feedback has significantly improved our work.
Best regards,
Authors | Summary: The authors present a paper that attempts to utilize action-free and action-conditioned trajectories to learn a large-scale interactive world model. This model is subsequently adapted for robot manipulation tasks. It is evaluated on video prediction, visual planning, and model-based RL. The model training is tested primarily on Metaworld in the embodied setup. The experiments demonstrate the generalization capabilities of the model.
Strengths: 1. The paper is well written, clear and easy to understand
2. The idea is well motivated. The question of utilizing internet scale knowledge for embodied intelligence is an open area of research.
3. The experiment results are statistically significant with multiple runs reported accompanied by error bars
4. The authors present a novel tokenization scheme which can benefit other video foundation models
Weaknesses: 1. **Performance on visual planning:** The performance on visual planning is rather unsatisfactory. Do the authors have any intuition about why their method outperforms the baselines in only 2 of the setups?
2. **No analysis on amount of action data needed:** During the motivation of the method, the authors discuss coming up with methods that are able to learn from freely available videos. Robot data on the other hand is expensive. Thus, ablations on how much robot data is needed are necessary to understand the dependence of the model on robot specific data and whether or not the method is indeed benefitting from freely available human videos.
3. **Lack of robot experiments:** Does the model extend to any real robot setups? Currently, the only low-level control robot experiments available are on 6 tasks, in which the model matches the performance of baselines in 3.
4. **No human user studies:** The paper only uses numerical metrics like Fréchet Video Distance to judge the quality of generation. This metric does not always align with perceived quality. The results of such a study would help bolster the quality of the work.
5. **Some missing baselines:** FitVid does indeed have an action-conditioned model, and this model could be used for all the MBRL experiments. This is currently missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is the Dreamer baseline in the paper also trained on OpenX data? If not, then the comparison is not fair.
2. How do you ensure that the $z^{(1:n)}_t$ contain all the information about future frames and that there is no leakage between which tokens contain what information?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. The evaluation suite is currently very small and should be expanded
2. It's unclear how much the human videos from Something-Something-v2 help the method. If they do not help significantly, this diminishes the contributions of the paper considerably, as it would make the work a study on the application of transformer-based world modeling to large robot datasets like OXE.
In spite of these limitations and weaknesses, I think the work makes interesting contributions, and I would be inclined to increase my score if the authors are able to address my questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to sincerely thank Reviewer HkK9 for the comprehensive review and insightful questions. We have **addressed common questions in the global response and individual questions below**.
### Q1: Amount of action data needed
As described in $\underline{\text{Sec 3.2}}$, we **do not use any action data during pre-training**. Leveraging more accessible, action-free video data reduces the cost of data collection. Although action-free robot videos are still more expensive than Internet videos, there has been significant progress in data sharing (e.g., BridgeData, RH20T, OXE), making robot data increasingly affordable. **What we truly need are unified frameworks like iVideoGPT to better utilize these data**.
**Expensive action-conditioned data are only used for downstream tasks**. In $\underline{\text{Sec 4.4}}$, we have demonstrated the few-shot adaptation capabilities for both (action-free) video prediction (Fig 7, 8c) and MBRL (Fig 6). In addition, we have also adapted iVideoGPT with 1,000 action-conditioned BAIR trajectories, which achieves 82.3 FVD.
### Q2: Contribution of human videos
As detailed in Appendix A.2, we used a mixture of OXE and SthSth for pre-training the tokenizer, as visual diversity is key to learning the separation of context and dynamics information. The transformer is pre-trained on OXE only, as we found that many SthSth videos with random camera motion are hard to predict. These choices were based on our experience; a comprehensive analysis is expensive and left for future work.
To assess the contribution of human videos, we pre-trained iVideoGPT with pure OXE data (taking almost a week) and evaluated on OXE validation data. We observe that human videos indeed help pre-training.
| Pretrain data | FVD | PSNR | SSIM | LPIPS |
| ------------- | -------- | -------- | -------- | ------- |
| OXE only | 48.3 | 24.3 | 85.5 | 9.1 |
| OXE+SthSth | **40.5** | **24.8** | **86.3** | **8.5** |
### Q3: MBRL baseline with FitVid
Although FitVid was originally designed for video prediction and has not been used as a world model for MBRL, we have implemented a baseline using FitVid by replacing iVideoGPT in our implementation. To predict rewards, we add an MLP head on top of FitVid's latent state, parallel to the image decoder.
As shown in $\underline{\text{Fig 5 of attached PDF}}$, MBPO with iVideoGPT outperforms FitVid on 5/6 tasks and performs comparably on the remaining one. We also qualitatively observe that FitVid's imagined trajectories are blurrier (as shown in $\underline{\text{Fig 4 of attached PDF}}$) compared to ours, which hinders its ability to accurately simulate real environments and may hurt MBRL performance.
### Q4: Dreamer baseline with pre-training
As elaborated in $\underline{\text{Q1 of global response}}$, Dreamer, with its RNN architecture, lacks the scalability to benefit from pre-training on real-world videos [1]. To empirically validate this, we have used a pre-training algorithm, APV [1], to pre-train DreamerV3-XL (200M parameters, comparable to iVideoGPT) on the same pre-training data as ours, and then finetune it on Metaworld tasks.
The results in $\underline{\text{Fig 5 of attached PDF}}$ indicate that Dreamer benefits almost not at all from pre-training. We also visualize DreamerV3's predictions after pre-training in $\underline{\text{Fig 2 of attached PDF}}$, which are of much lower quality compared to iVideoGPT's, validating the limited capacity of the Dreamer architecture.
[1] Reinforcement Learning with Action-Free Pre-Training. ICML 2022.
### Q5: Human study
As official pretrained models are not released for most baselines, we are only able to compare iVideoGPT with VideoGPT and MCVD on the action-free BAIR dataset. We generate videos from the test set using the three models and ask users to label their preference between two randomly sampled videos, based on the physical naturalness and feasibility of robot-object interactions.
We collected 386 annotations from 9 participants. The results in $\underline{\text{Fig 6 of attached PDF}}$ demonstrate that iVideoGPT is preferred by human annotators.
### Q6: Lack of real-robot experiments
We would like to clarify that, instead of being 'tested primarily on Metaworld in the embodied setup', our experiments in Sec. 4.2 (visual planning) and Sec. 4.3 (visual MBRL), although conducted in simulation, are **both embodied low-level control settings**.
We apologize that the limited rebuttal period is insufficient for us to set up a real-robot experiment. We do not claim that our method is instantly ready for real-robot applications (this is left for future work), but we believe our method can help advance model-based methods in robotics research.
### Q7: Visual planning performance
We find that inconsistent performance in the VP2 benchmark can be primarily attributed to inaccurate built-in reward design of this benchmark. Please refer to Q4 in global response for details.
### Q8: "Small" evaluation suite
We respectfully disagree with the statement that "The evaluation suite is currently very small". In fact, **no baseline covers all three problem settings in our paper**. We hope the additional results provided in this rebuttal can solidify our evaluations.
### Q9: Context-dynamics decomposition in tokenization
While we cannot guarantee perfect decomposition and zero information leakage in neural networks, we aim to achieve this through careful tokenizer design: a bottleneck with far fewer tokens compels the model to capture only the dynamics information necessary for future frames and to share contextual information with the initial frames in order to reconstruct raw pixels.
We visualize the decomposition property in $\underline{\text{Fig 1 in attached PDF}}$. The showcases illustrate that, by removing cross-attention from context frames into future frames, the decoder can still reconstruct a trajectory **moving the same way** as the normally reconstructed one, but with the **context almost missing**.
---
Rebuttal 2:
Title: Required Action: Please Respond to the Author Rebuttal
Comment: Dear Reviewer HkK9,
As the Area Chair for NeurIPS 2024, I am writing to kindly request your attention to the authors' rebuttal for the paper you reviewed.
The authors have provided additional information and clarifications in response to the concerns raised in your initial review. Your insights and expertise are invaluable to our decision-making process, and we would greatly appreciate your assessment of whether the authors' rebuttal adequately addresses your questions or concerns.
Please review the rebuttal and provide feedback. Your continued engagement ensures a fair and thorough review process.
Thank you for your time and dedication to NeurIPS 2024.
Best regards,
Area Chair, NeurIPS 2024 | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their constructive feedback. We have made every effort to address all concerns and have responded to individual reviews. We have also **answered common questions in this global response**.
Please note that **all the new figures for all responses are included in the PDF attachment**.
### Q1: Motivation of iVideoGPT
As discussed in Sec. 1 of the paper, iVideoGPT is **motivated by two distinct lines of research**:
- **World models** in RL serve as an **interactive** simulation of real environments by modeling transitions. However, the dominant architectures are still based on RNNs (like Dreamer and MuZero), which have **limited scalability** compared to Transformers (see Fig 7 in [1]; here **scalability means the ability to increase capacity effectively with additional parameters**). Thus, current world models rarely pre-train on large-scale real-world videos and struggle to benefit from it (see Fig 8c in [2] and Figs 2 \& 5 in attached PDF), leading to insufficient sample efficiency in model-based control.
- **Video generation models** (e.g. Sora) are recently advanced by **scalable** architecture and large-scale training. They share advantages with foundation models in other fields (like LLMs): strong performance with sufficient task data, fast adaptation with few-shot downstream samples, and zero-shot generalization on unseen domains. However, these models typically generate all video frames simultaneously, allowing text/action conditions only at the start, thus **lacking the ability for interaction** during generation.
To bridge both sides, our work is a framework where **scalable and interactive architecture can be pre-trained with large-scale data and then adapted into various downstream tasks**. Across three control-relevant settings, iVideoGPT demonstrates favorable properties, including **competitive task performance** (Tab 1, Figs 4\&6 in paper), **few-shot adaptation** (Fig 8a in paper), and **zero-shot generalization** (Fig 7 in paper).
While many future directions remain to be explored, we believe that our work, as a valuable prototype towards large world models, will contribute to the community.
[1] Scaling laws for neural language models. 2020.
[2] Reinforcement Learning with Action-Free Pre-Training. ICML 2022.
### Q2: Tokenization Efficiency
In addition to the scalable Transformer architecture, we introduce a new compressive tokenization. By exploiting temporal redundancy, a significantly smaller number of tokens **greatly saves time and memory, allowing us to scale the model size at lower cost**.
To demonstrate this efficiency, we report the time and memory consumption of iVideoGPT with various tokenizers, showing that our method achieves a good balance between efficiency and quality:
- Pre-training the transformer (A100 with per-device batch size 16)
|Tokenizer|Speed (iters/sec)|Memory (GB)|
|-|-|-|
|4x4|3.10|10.6|
|16x16|N/A|**OOM**|
|Ours|2.62|22.3|
- Inference (RTX 4090 with batch size 1)
|Tokenizer|Tokenize (sec)|Generation (sec)|Detokenize (sec)| Memory (GB)|
|-|-|-|-|-|
|4x4|0.27|1.13|0.05|1.98|
|16x16|0.26|**22.5**|0.04|1.90|
|Ours|0.29|1.11|0.06|2.33|
### Q3: Difference with IRIS
We summarize the following key differences from IRIS:
1. **Pre-training and fine-tuning paradigm**: iVideoGPT is designed for a paradigm that involves pre-training on large-scale videos and fine-tuning on various downstream tasks (see Q1 above). In contrast, IRIS focuses solely on MBRL with Transformer-based world models trained from scratch in the Atari domain.
2. **Efficient tokenization**: iVideoGPT proposes novel compressive tokenization to significantly reduce the number of tokens, saving time and memory (see Q2 above), while IRIS uses per-frame tokenization.
3. **Flexible action-conditioning design**: iVideoGPT employs slot tokens with optional additive action embeddings to support both action-free pre-training and action-conditioned fine-tuning, while IRIS strictly treats discrete Atari actions as tokens.
4. **Off-policy MBRL implementation**: iVideoGPT uses an off-policy RL algorithm while IRIS performs on-policy learning. On-policy learning requires a large number of model rollouts, which, combined with inefficient tokenization, results in ~7 days for 100k-environment-step training. In comparison, iVideoGPT takes only ~4 hours.
We will include this extended discussion in a future revision.
### Q4: Visual planning performance
We note that **no current model in the VP2 benchmark consistently outperforms other models across all tasks**, and iVideoGPT is no exception. We conducted these visual planning experiments, aiming to show that our model can effectively handle various settings with competitive performance.
When analyzing the inconsistent performance on this benchmark, we primarily attribute it to **imperfect built-in reward designs of VP2**. In this benchmark, scores for sampled actions are mainly estimated by a learned classifier of task success based on model-predicted frames. This classifier, trained by the VP2 authors, appears to lack robustness and can be easily fooled by out-of-distribution (OOD) inputs, assigning high rewards to trajectories that are less likely to succeed (see examples in Fig 3 of attached PDF). This imperfect reward function likely contributes to the mixed results observed on this benchmark (with iVideoGPT even outperforming the oracle simulator on one task). Addressing visual planning with imperfect rewards is an independent research problem beyond the scope of this paper. We also find that our model can produce incorrect predictions for severely OOD actions, likely due to narrow training data. We will include this discussion in the limitation section.
Pdf: /pdf/de46de92060230a55e9245f408753fa2ef7c0590.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Complexity of Teaching a Family of Linear Behavior Cloning Learners | Accept (poster) | Summary:
This paper introduces TIE (Teach using Iterative Elimination), a novel algorithm aimed at efficiently teaching a family of consistent linear behavior cloning (BC) learners. The algorithm focuses on constructing an optimal teaching set from a fixed set of states, demonstrating actions that induce the target policy across all members of the learner family. The approach is evaluated across various environments, demonstrating its effectiveness in producing near-optimal teaching sets compared to baseline methods.
Strengths:
Algorithmic Innovation:
TIE introduces a systematic approach to teaching linear BC learners by iteratively eliminating states that are not necessary for inducing the target policy. This iterative elimination based on covering extreme rays of the version space cone simplifies the otherwise complex problem of optimal teaching.
Theoretical Guarantees:
The paper provides theoretical guarantees, showing that TIE achieves near-optimal teaching dimension, particularly noting its efficiency in generating teaching sets that cover the version space efficiently. This is supported by proofs and corollaries that establish the algorithm's effectiveness under different settings.
Empirical Validation:
The effectiveness of TIE is demonstrated across diverse environments, including maze navigation and coding tasks. It consistently outperforms baseline methods such as Teach-All and Teach-Random, illustrating its practical applicability and superiority in real-world scenarios.
Weaknesses:
Computational Complexity:
While the paper discusses the efficiency of TIE in generating teaching sets, the computational complexity may increase significantly with larger state and action spaces. The approach relies on solving set cover problems, which can be NP-hard, especially as the size of the action space increases.
Assumption of Known Features:
The algorithm assumes knowledge of feature representations and their effectiveness in inducing the target policy. This may limit its applicability in cases where feature representations are not well-defined or when the learner's policy cannot be fully captured by the chosen features.
Generalization to Non-linear Learners:
The focus on linear BC learners limits the generalizability of the proposed method to more complex learners such as neural networks or non-linear models. Extending the approach to these settings would require significant adaptation and validation.
Insignificant Community Contribution:
This work assumes that learners possess the consistency property, which is an overly stringent assumption imposing strict constraints on the hypothesis class. In Section 3.1, while the main contribution is outlined, it lacks a rigorous analysis, failing to obtain the necessary approximation loss bound. Regarding Section 3.2, the contributions appear negligible; the theorems and lemmas presented therein do not lead to significant conclusions.
Technical Quality: 2
Clarity: 2
Questions for Authors: How to control the approximation teaching loss? Any safety guarantees?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: This work assumes that learners possess consistent properties, which is an overly stringent assumption imposing strict constraints on the hypothesis class.
The recent contributions to the machine teaching community have been overlooked. Please consider including more surveys from recent conferences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and suggestions. Please find our replies to your individual raised concerns below.
### **1. Regarding the setting where the feature function is not well defined.**
Thank you for your feedback. We believe your comment may be referring to the scenario when the teacher does not know the exact feature function of the learner. If this is not what you meant, we would greatly appreciate further clarification so that we can appropriately address your concerns.
Assuming you meant the former, we note that in a benevolent teaching setting, it is reasonable for the teacher to know the feature function of the learner. Indeed, several prior works in machine teaching, e.g., [6], [7], [8], have made this assumption. However, we acknowledge that it would be an interesting direction to study whether the teacher can itself learn the feature function of the learner; this is beyond the scope of this work, and we leave it to future works to address this setting.
### **2. When the target policy is not realizable under the learner's feature function, and the corresponding approximation guarantee.**
We acknowledge that realizability can be a strict requirement for the learner and appreciate your concern about it. Indeed, it would be interesting to also consider approximate teaching goals where the teacher is required to teach a class-optimal policy (the best policy in the learner's hypothesis class).
In fact, it is quite easy to extend our work to the setting of teaching an approximately optimal policy to the learner, i.e., $\pi^* \leftarrow \arg\max_{\pi_w:\, w \in \mathbb{R}^d} V^{\pi_w}_\mu$. Since the class-optimal policy is realizable w.r.t. the feature function $\phi$, our results and analysis would apply to teaching the approximately optimal policy as well. However, we note that the teacher would eventually suffer a finite approximation error, which cannot be avoided due to the bias of the learner's hypothesis class.
### **3. Clarifying the relevance of theorem and lemmas in section 3.2 to optimal teaching.**
We respectfully disagree with the observation that the theorems and lemmas in Section 3.2 do not lead to any conclusions. In fact, Lemma 2 is crucial: it gives us a handle on the seemingly difficult infinite set-covering problem that arises when attempting to solve the problem naively using the prior method of inconsistent-hypothesis elimination.
We note that Section 3.1 merely aims to highlight the difficulty of solving our optimal teaching problem by naively using the method of inconsistent-hypothesis elimination from prior works [3]. Since our universe and cover subsets are uncountable, this leads to a non-trivial uncountable set-covering problem that cannot be solved by prior methods.
Our main contribution lies in showing that this problem of covering/eliminating the uncountable set can ultimately be reduced to covering the finitely many extreme rays of the induced cone (as shown in Lemma 2). This motivates our algorithm TIE, which first finds the extreme rays of the cone and then covers them using the smallest subset of the finite set of states. We later prove that solving TIE exactly is NP-hard, but we also provide an efficient greedy version of TIE that comes with a $\log(|A|-1)$ approximation guarantee.
### **4. Regarding missing recent related works and surveys.**
We appreciate your constructive comment to include more recent related works in machine teaching in our paper. On our side, we have taken care to cite all recent works specifically on teaching linear learners, version space learners, and RL learners that are relevant to our paper.
However, we recognize that despite our diligence, some of the important papers might have been inadvertently missed. We would greatly appreciate it if you could point us to any specific paper or surveys you have in mind and we will be happy to include it in our related works section. We thank you for your help to enhance the quality of our paper.
### **5. Generalization to non-linear consistent neural network learners.**
We thank you for your suggestion to generalize our setting to teaching more complex learners such as neural networks. Indeed, it would be an interesting direction to extend our work to fully trainable neural networks (NNs). However, we note that quantifying the version space of NN learners is itself a challenging problem and is beyond the scope of our linear-learner setting.
Nevertheless, we would like to mention a simplified transfer-learning setting (where all layers of the NN except the last are fixed, a setup that has recently become very popular for large models) in which our teaching algorithm can be directly applied to teaching consistent NN learners using latent feature representations.
More concretely, one can treat the pre-trained network (excluding the last layer) as the feature representation function $\phi$ in our setting and apply our TIE algorithm to the induced latent feature space. Since our primary focus was on studying optimal teaching for consistent linear learners, we did not mention this simple extension in our work, but we would be happy to include a subsection in the appendix if you think highlighting this connection would be helpful to a broader audience.
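As a hedged sketch of this extension (names, shapes, and the stubbed network are illustrative assumptions, not the paper's code), the frozen pre-trained layers can be modeled as a fixed feature map on which a linear learner is then taught:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen pre-trained network (all layers except the last),
# stubbed here as one fixed random linear layer followed by a ReLU.
W_frozen = rng.standard_normal((16, 8))

def phi(x):
    """Latent feature map induced by the frozen layers: the teacher and
    all consistent linear learners operate on phi(x), not on raw x."""
    return np.maximum(W_frozen @ x, 0.0)
```

A teaching algorithm like TIE would then be run on the induced features `phi(s, a)` rather than on the raw inputs.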
### **6. Regarding consistency property being an overly stringent assumption.**
It would indeed be interesting to consider a scenario where the learner cannot perfectly fit the demonstrated data. However, recent works [1], [2] on over-parameterized models have shown that overfitting the data need not be a bad thing (as was previously thought) and can lead to improved generalization performance.
Furthermore, in the theory of machine teaching, several past works [3], [4], [5] have made this assumption. So, we believe it is a good first step towards our goal of understanding optimal teaching of consistent linear learners, and we leave inconsistent learners to future work.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you for the rebuttal. After reviewing the other comments, I share the concern regarding the strict assumptions in the work. Although the authors have cited some publications to argue that their assumptions are more relaxed, I do not find this evidence convincing enough.
---
Rebuttal 2:
Comment: We hope our responses satisfactorily addressed your concerns. We look forward to your further suggestions and final evaluation. Thanks again for your time and consideration!
---
References :
[1] "Reconciling Modern Machine Learning and the Bias-Variance Trade-off" by Belkin et al. (PNAS 2019)
[2] "Understanding Deep Learning Requires Rethinking Generalization" by Zhang et al. (ICLR 2017)
[3] "On the complexity of teaching" by Goldman et. al.
[4] "The Teaching Dimension of Linear Learners" by Liu et. al. (JMLR 2017)
[5] "The Teaching Dimension of Kernel Perceptron" by Kumar et. al. (AISTATS 2021)
[6]. "Interactive Teaching Algorithms for Inverse Reinforcement Learning" by Kamalaruban et. al. (IJCAI 2019)
[7]. "Teaching Inverse Reinforcement Learners via Features and Demonstrations" by Haug et. al. (NeurIPS 2018)
[8]. "Teaching Multiple Inverse Reinforcement Learners" by Melo et. al. (Frontiers in AI 2021)
---
Rebuttal 3:
Comment: Thank you for your reply! We have made efforts to address the main concerns that you and other reviewers have raised in the feedback.
Specifically,
1. We addressed realizability by defining an approximate teaching objective and requiring the teacher to teach an approximately optimal policy.
2. Regarding the consistency assumption, we would like to clarify that several well-known algorithms such as SVMs and perceptrons are consistent learners. Prior works on machine teaching [4], [5] have studied optimal teaching of these consistent learners separately, while our work studies joint teaching of all consistent learners. Thus, with regard to consistency, our work should be seen as an improvement over prior works rather than a limitation. Furthermore, consistency should not be considered a strict assumption, as it covers several well-known algorithms such as SVMs, perceptrons, and version-space learners.
We sincerely hope that you will consider our points and take them into account during the final evaluation. If you have any additional specific concerns, please let us know. Thank you once again for your time and consideration! | Summary: This paper studies optimal teaching of behavior cloning learner with a linear hypothesis class, that is, finding the minimum number of demonstrations needed to teach a target policy to the entire family of consistent linear BC learner. They first show that this problem can be transformed into a finite set-cover problem, which is NP-hard when the size of action space is greater than $2$. They also propose an efficient approximately optimal algorithm such that the size of dataset is at most $\log(|A|-1)$ times that of the optimal one. Finally, they perform empirical experiments to show that their algorithm finds a much smaller teaching set compared to using all data and randomly selecting a subset of data.
Strengths: This paper studies a setting of potentially great importance in practice: finding the smallest dataset that can teach a family of BC learner to fit the optimal policy. Such an algorithm would be useful when there is a large dataset and the horizon is long. The results presented in this paper is elegant and initial experimental results seem promising. The paper is also easy to follow with clear notations.
Weaknesses: 1. It is a bit unclear to me why being able to teach a family of learners is preferable compared to teaching a specific type of learner. The author gives a motivating example on teaching a whole class of students, but I am still not sure whether there are any real application scenario in machine learning that can motivate such a setting.
2. Although the approximately optimal algorithm runs in polynomial time, the time complexity still scales up quickly w.r.t the number of states and actions and does not seem to be practical.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Perfectly fitting the learner to the training set is not a common practice. Does it have any implication on the validation accuracy?
2. Can you obtain any meaning results without the realizability assumption?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Nothing necessary stands out.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and suggestions. Please find our replies to your individual raised concerns below.
### **1. Why teaching a family instead of individual learners - any real scenario in ml?**
We appreciate your thoughtful question. In a real-life scenario of teaching a population of learners, each learner can have its own bias for choosing one consistent hypothesis over another, often implicitly induced by its loss function. Designing an optimal teaching dataset for each of them would not only require algorithmic novelty but would also be computationally expensive for the teacher. However, if all the learners are consistent, our teacher can construct a single teaching dataset to teach them all.
For example, consider the following teach-by-demonstration task in a robotics setting. In a population of users, every user trains their own robot to assist them in a home-cleaning task by providing demonstrations. Each user is likely to use a different algorithm to train their robot.
From a security point of view, it is safer for users to accept training data, since they can inspect the labels directly, whereas accepting a trained model opens the possibility of hidden backdoors. In that scenario, assuming they all use consistent linear learners, our teacher can construct a small demonstration dataset and provide it to all users so that they can train their algorithms to the correct behavior policy.
### **2. Time complexity scales quickly with state size.**
We understand your concern and would like to note that $n$ is not a natural parameter for the diamond game example. Instead, the set of all possible configurations in which objects could appear on the board, i.e., the state space, is the natural parameter quantifying the complexity of the problem. In the RL literature, an algorithm that runs in time polynomial in $|S|$ and $|A|$ is considered efficient [1]. On the other hand, expecting sublinear complexity raises the following question: how can one find an optimal dataset without even being allowed to enumerate all input points?
Nevertheless, the exponential reduction in teaching set size from $|S|$ to $\log(|S|)$ is of great importance in any real-life setting as it can significantly speed up the training process of the learner.
Furthermore, we note that computing an optimal teaching set is only a one-time cost. Once the teacher computes it, it can use the same teaching set to teach any consistent learner. On the other hand, prior works require computing an optimal teaching set for each learner which can be more computationally expensive for teaching a class of diverse but consistent learners.
### **3. Imperfectly fitting the training dataset for the learner.**
It would indeed be interesting to consider learners that cannot perfectly fit the demonstrated data. However, recent works [1], [2] on over-parameterized models have suggested that perfectly fitting the training data need not be a bad thing (as was previously thought) and can lead to improved generalization error.
Furthermore, in the context of machine teaching, several past works [3], [4], [5] have made this assumption. So, we believe it is a reasonable assumption to start with, and we leave relaxing it to future work.
### **4. Meaningful results without realizability assumption.**
Indeed, we can extend our setup to an agnostic teaching setting, where the learner's (policy) function class need not contain the globally optimal policy. In that scenario, the teacher will aim to teach the class (approximately) optimal policy to the learner i.e. $\pi^* \leftarrow \arg\max_{\pi \in \Pi(w), w \in \mathbb R^d} V^{\pi_w}_\mu$.
We note that the teacher would eventually have to suffer a finite approximation error which cannot be avoided due to the bias induced by the learner's restricted feature function and the associated linear hypothesis class. Moreover, since the class optimal policy is realizable wrt the feature function $\phi$, our results and analysis would apply to teaching this policy as well.
We hope our responses satisfactorily answered your questions. Should you have any follow up questions, please let us know. Thank you for your time and consideration!
---
References :
[1] "Reconciling Modern Machine Learning and the Bias-Variance Trade-off" by Belkin et al. (PNAS 2019)
[2] "Understanding Deep Learning Requires Rethinking Generalization" by Zhang et al. (ICLR 2017)
[3] "On the complexity of teaching" by Goldman et. al.
[4] "The Teaching Dimension of Linear Learners" by Liu et. al. (JMLR 2017)
[5] "The Teaching Dimension of Kernel Perceptron" by Kumar et. al. (AISTATS 2021) | Summary: The authors propose a method for determining the a minimal dataset of state-actions tuples that would allow a family of linear learning agents to learn to imitate the optimal policy of a teacher. The authors motivate their method by describing the desired set of all linear weights that lead to an optimal policy through the difference in feature vectors between optimal and non-optimal features, $w^\top (\psi(s,\pi^*(a)) - \psi(s,b)) > 0$, and noticing that each datapoint eliminates a halfspace, $\lbrace w : w^\top (\psi(s,\pi^*(a)) - \psi(s,b))\rbrace$. They show that this covering problem is equivalent to covering the "extreme rays" of the cone of the differences in feature, $\psi(s,\pi^*(a)) - \psi(s,b)$. Once the extreme rays are found, the states to include in the dataset are found by finding the minimal set of states that cover them through the solving of a finite cover set problem.
The authors provide an analysis of their method showing that it is optimal. Additionally, they show that this problem is NP-hard and provide an approximate algorithm for finding near-minimal datasets. Finally, the authors show their method working in a "pick the right diamond" game, a maze-navigation programming problem, and a polygon tower environment.
Strengths: The paper is well written and structured.
The theoretical results are conclusive and provide an interesting perspective on the problem.
Weaknesses: I do not have any significant criticism.
### minor comments and typos
Line 67, if $\mathcal{L}: D \to 2^\mathcal{H}$ and $\mathcal{H} = \mathbb{R}^d$, what does $2^{\mathbb{R}^d}$ mean?
Line 67, $\mathcal{L}(D)$ is assigned the argmin of a $w$ and $\pi$, feels like there are some type mismatch here.
Line 163, "an ray" -> "a ray".
Technical Quality: 4
Clarity: 4
Questions for Authors: Line 63, what is the meaning of $\Delta$?
Corollary 5, how strong are these assumptions? When would they realistically hold?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No issues
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our work and their valuable feedback. Please find our responses to your individual comments below.
### **1. Clarification on notations and typos.**
- The set $2^{\mathbb{R}^d}$ denotes the set of all possible subsets of the space $\mathbb{R}^d$. The learner's version space belongs to this set.
- $L(D)$ denotes all consistent policies that minimize the empirical risk of the learner. Switching the order of $w$ and $\pi$, i.e., having the learner first choose a consistent $w$ and then select any policy induced by this $w$, fixes this issue.
- $\Delta$ denotes the simplex over finite input set. In this case, it denotes all stochastic policies over the argmax action set.
- We thank you for pointing out the typos! We will fix them in the updated version.
### **2. How strong is the finite extreme ray assumption in the infinite state setting in corollary 5.**
We appreciate your attention to this point. In practical settings, it is not hard to find examples where this assumption holds. For example, consider the following environment (with a continuous state space) where an agent moves clockwise around a circle with a fixed center and radius. The state of the agent is a continuous angle in $[0, 2\pi)$.
Now, consider a feature function that maps the angle into one of four quadrants represented by four one-hot vectors in $\mathbb R^4$. Note that even though the state space is infinite the induced feature space and correspondingly the extreme rays are finite in size and thus our assumption holds true.
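A minimal sketch of this example (function name ours, not from the paper): the continuous angle maps to one of only four one-hot feature vectors, so the induced feature space, and hence the set of extreme rays, is finite.

```python
import math

def quadrant_feature(theta):
    """Map a continuous angle in [0, 2*pi) to a one-hot vector in R^4
    indicating its quadrant, so the induced feature space is finite."""
    q = int(theta // (math.pi / 2)) % 4
    feat = [0.0] * 4
    feat[q] = 1.0
    return feat
```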
We hope our responses satisfactorily answered your questions. Thank you for your time and consideration. | Summary: The problem being considered is how to provide demonstrations that are useful to a family of consistent behavior cloning (BC) learners, where consistency means that the learner produces a policy consistent with the dataset. The authors study what is the smallest dataset required to teach a family of consistent linear BC learners. The authors characterizes this problem and provides an algorithm called TIE that finds an $\log(A)$ approximate teaching set. Finally there is a demonstration on a visual programming problem.
Strengths: 1. For the most part, the paper is written in a clear and instructive way. Overall, the paper introduces a new setting (teaching a family of consistent linear learners) and provides a reasonably extensive treatment of the setting, including proving sufficient & necessary conditions for solutions, proving that it is NP-hard, providing an approximation algorithm.
Weaknesses: 1. The empirical baselines that are considered seem rather weak. Teach-All is just brute-force (which as mentioned scales exponentially with n) and Teach-Random is just looking at states randomly. Are there better baselines for this task? Since you could reduce to set cover, can you also try popular approximation algorithms for set cover? In particular, do they directly work for this problem, and if so can you provide empirical / theoretical comparisons to TIE?
1b. Related to the above, the experimental setting also seems a bit lacking. The problem is of very small scale where the optimal solution of this NP-hard problem can be computed. The paper would be much better motivated if there was an experiment that demonstrated the significance of TIE and the problem formulation being studied.
2. Cor 8 seems to be the crux of the paper since it's claimed to be a practical alg for this NP-hard problem. However, it is hastily stated and not discussed in detail at all.
3. The computational complexity of TIE is $(SA)^3$ (line 218) which seems quite unscalable. For example, in the Diamond example, the state space grows exponentially with n, so any poly dependence on $S$ is bad. In other words, while TIE might be able to find a very small dataset for teaching that doesn't grow exponentially with n, it's still prohibitive if TIE itself grows exponentially with n.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. $\Delta$ is undefined on line 63.
2. While I'm not very familiar with the literature for optimal teaching via BC, requiring consistent learners to incur zero risk seems rather stringent. It essentially requires $\pi^\star$ to lie in the policy class, which is often not the case as the training error is almost never zero in practice. Can this be relaxed to a more realistic (maybe agnostic) setting?
3. The authors claim that SVM, perceptron and logistic regression are specific instances of this paper's formulation. I wonder if an experiment can compare TIE vs. algorithms specific to SVM, perception, or logistic regression? Since TIE is supposed to be general, how much performance degradation do we see?
4. I'm a bit confused by Thm 4 and Cor 8. On one hand, Thm 4 claims that TIE achieves teaching dimension. On the other hand, Cor 8 says TIE finds an approximate solution which contradicts Thm 4. Are they referring to two instantiations of TIE? Is TIE referring to the exact exponential-time alg or is TIE referring to the approximation alg?
Minor typos:
1. Extra "like" on line 294
2. mulitple on line 290
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No. The checklist says that limitations are discussed in the last section of the paper, but I did not find such a discussion anywhere. Can the authors please discuss their limitations in detail?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and suggestions. Please find our replies to your individual raised concerns below.
### **1. Baselines are rather weak; provide comparison with other baselines for set cover.**
We do agree that Teach-ALL is a weak baseline. However, we note that Teach-Random is not that weak, as it utilizes our crucial insight of covering the infinite hypothesis space by covering only the finite extreme rays, as stated in Lemma 2.
We thank you for your suggestion to try other algorithms for the set cover subproblem. To that end, we implemented three other approximation algorithms of set cover: 1. relaxed LP method, 2. randomized rounding method, and 3. primal-dual method.
We produced the teaching-set-size results (as in experiment 5.b) by substituting these candidates for the greedy algorithm and found that they performed as well as greedy. This is not surprising, since these algorithms have logarithmic approximation guarantees similar to the greedy algorithm's. We recall that they all still use our insight of covering the finite extreme rays, as given by Lemma 2.
### **2. Clarifying difference between Theorem 4 and Corollary 8.**
We apologize for the confusion caused by the wording of Corollary 8. Corollary 8 is intended to be read together with the preceding statements, and we clarify the differences below. We will also add this clarification in the updated version of the paper.
First, we note that our algorithm TIE has two parts: 1) finding all extreme rays of the cone, and 2) covering these extreme rays with a minimal set of states. The first part is computationally efficient in the relevant parameters, i.e., $|S|$ and $|A|$, as it involves solving a sequence of linear programs, while the second part is a finite set-cover problem, which is known to be NP-hard.
Theorem 4 concerns the teaching sample complexity (also known as the Teaching Dimension) achieved by Algorithm 1: TIE produces an optimal teaching-set size for any instance of the problem. However, this algorithm is not computationally efficient, as it involves solving an NP-hard set-cover problem in line 5 of the OptimalTeach procedure of Algorithm 1. Second, in Theorem 7, we prove that this hardness cannot be avoided in general by showing that our problem is indeed NP-hard.
Finally, we propose a computationally efficient and approximately optimal algorithm by using a greedy solver for the set-cover subproblem in line 5 of the OptimalTeach procedure. We can call this algorithm Greedy-TIE; it achieves an approximation ratio of $\log(|\mathcal A|-1)$ on the optimal teaching-set size, as stated in Corollary 8.
The approximation result follows straightforwardly from the following known result on approximating the set-cover problem: "Consider a set-cover problem where the universe is of size $n$ and each cover subset is of size at most $m$; then the greedy algorithm achieves an approximation ratio of $\log(m)$ on the size of the optimal cover" [2]. In our case, since each cover subset has size at most $|\mathcal A|-1$, the corollary follows.
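For concreteness, a small self-contained sketch comparing the greedy set-cover heuristic against the brute-force optimum (this is generic set cover, not the paper's exact code; on small instances the two can be checked against each other):

```python
import itertools

def greedy_set_cover(universe, subsets):
    """Greedy heuristic: repeatedly take the subset covering the most
    still-uncovered elements. Approximation ratio log(m), where m is the
    largest subset size."""
    uncovered, chosen = set(universe), []
    while uncovered:
        i = max(range(len(subsets)), key=lambda j: len(subsets[j] & uncovered))
        if not subsets[i] & uncovered:
            raise ValueError("instance is infeasible")
        chosen.append(i)
        uncovered -= subsets[i]
    return chosen

def optimal_set_cover(universe, subsets):
    """Brute-force optimum: try all subset combinations by increasing size
    (exponential time; only for tiny instances)."""
    for k in range(1, len(subsets) + 1):
        for combo in itertools.combinations(range(len(subsets)), k):
            if set().union(*(subsets[i] for i in combo)) >= set(universe):
                return list(combo)
    return None
```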
### **3. Is it essential to require optimal policy to be in the learner's policy class? Can we extend it to agnostic setting?**
This is a great question. Indeed, we can extend our setup to an agnostic teaching setting, where the learner's (policy) function class need not contain the globally optimal policy. In that scenario, the teacher will aim to teach the class (approximately) optimal policy to the learner i.e. $\pi^* \leftarrow \arg\max_{\pi \in \Pi(w), w \in \mathbb R^d} V^{\pi_w}_\mu$.
We note that the teacher would eventually have to suffer a finite approximation error which cannot be avoided due to the bias induced by the learner's restricted feature function and the associated linear hypothesis class. Moreover, since the class optimal policy is realizable wrt the feature function $\phi$, our results and analysis would apply to teaching this policy as well.
### **4. Compare to learner-specific optimal teaching**
Most prior works on the optimal teaching of specific learners are limited to a relatively easier, constructive setting where the teacher can construct any covariate/feature vector in $\mathbb R^d$. In contrast, our setting is non-constructive: the teacher can only use the feature vectors induced by (state, action) tuples, leading to a difficult combinatorial optimization problem that is not present in a constructive setting.
For this reason, the optimal teaching algorithms proposed by prior works would fail to even produce a valid teaching set (they would produce only covariate vectors rather than actual states to teach) and thus cannot be applied to our setting.
### **5. Regarding computational complexity**
We understand your concern and would like to note that $n$ is not a natural parameter for the diamond game example. Instead, the set of all possible configurations in which objects could appear on the board, i.e., the state space, is the natural parameter for the problem. In the RL literature, an algorithm that runs in time polynomial in $|S|$ and $|A|$ is considered efficient [1]. On the other hand, expecting sublinear complexity raises the following question: how can one find an optimal dataset without even being allowed to enumerate all input points?
Nevertheless, the exponential reduction in teaching set size from $|S|$ to $\log(|S|)$ is of great importance in any real-life setting as it can significantly speed up the training process of the learner.
Furthermore, we note that computing an optimal teaching set is only a one-time cost. Once the teacher computes it, it can use the same teaching set to teach any consistent learner. On the other hand, prior works require computing an optimal teaching set for each learner which can be more computationally expensive for teaching a class of diverse consistent learners.
---
Rebuttal 2:
Comment: ### **6. Regarding limitations of our work.**
We mentioned a few limitations of our work in the discussion section and would like to state them below along with some additional points. We would include these limitations in the updated version of the paper.
1. Our formulation only works for linear version space learners and cannot handle more complex non-linear learners with the neural network hypothesis class.
2. Our problem setup makes a realizability assumption which (as you noted) is not very satisfactory. This can be resolved by considering agnostic teaching as described in point 3. We can include this setting directly in the paper instead of mentioning it as a limitation.
3. Our framework cannot tolerate labeling errors from the teacher. Handling them would eventually require a robust learner (possibly built on robust statistics), which would be an interesting future direction to explore.
### **7. Notation clarification.**
$\Delta$ denotes the simplex over finite input values. In this case, it denotes all stochastic policies over the argmax action set.
### **8. Demonstrate the significance of TIE.**
We note that the significance of our algorithm lies in both parts mentioned in point 2. As we have noted in point 5, our optimal teaching algorithm reduces the teaching set size from $|S|$(by Teach-ALL) to $\log(|S|)$(by TIE) in case of diamond example which is a significant cost saving from the perspective of training any consistent linear learner.
We appreciate your detailed feedback and hope that our responses have adequately addressed your concerns. We look forward to your final evaluation. Thank you for your time and consideration!
---
References :
[1] [Reinforcement Learning : Theory and Algorithms](https://rltheorybook.github.io/rltheorybook_AJKS.pdf) by Agarwal et. al.
[2] [Approximation Set Cover](https://www.cs.dartmouth.edu/~ac/Teach/CS105-Winter05/Notes/wan-ba-notes.pdf) by Wan et. al.
---
Rebuttal Comment 2.1:
Comment: Thanks for the detailed response. My confusion about Theorem 4 and Cor 8 is cleared -- the authors should revise the paper to make this clear since the "Greedy-Tie" point is not obvious from the current text. I also appreciate the helpful clarifications regarding my other questions. For this I increase my score from 4 to 5.
However, my concern with point 5 remains: the algorithm's compute scales cubically with $S$ (exponential in $n$) for the diamond example, which seems prohibitive -- since the diamond problem can be described by a string of length of order $\log S$, this is indeed exponential running time.
---
Rebuttal 3:
Comment: Thank you for your updated feedback! We're glad that our responses have helped to resolve some of your concerns. We will include these points in the updated version of the paper to make things clear.
We appreciate your concern about time complexity issue and would like to clarify things further to address this point. First, we note that an instance of our teaching problem is defined by the tuple $(S, A, \phi, \pi^*)$. While we can encode the state space using $\log(S)$ bits (as you noted), encoding that alone is not sufficient to completely specify the problem instance as we still need to specify the feature function and the optimal policy.
We note that,
- Specifying the feature for each $(s,a) \in S \times A$ requires a constant number of bits (for simplicity, we can assume features are binary, i.e., 0/1-valued, so each entry needs just one bit). Thus, completely specifying the feature function $\phi : S \times A \rightarrow \\{0,1\\}$ requires $|S||A|$ bits.
- Specifying the target optimal action for each $s \in S$ requires $\log(|A|)$ bits, as there are $|A|$ possible actions. Thus, we need $O(|S|\log(|A|))$ bits to specify the target policy $\pi^*$.
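The counting argument above can be sketched numerically (helper name ours, not from the paper):

```python
import math

def instance_encoding_bits(num_states, num_actions):
    """Bits to specify one problem instance (S, A, phi, pi*), assuming
    binary features: |S||A| bits for phi plus |S|*ceil(log2 |A|) bits
    for the target policy pi*."""
    feature_bits = num_states * num_actions
    policy_bits = num_states * math.ceil(math.log2(num_actions))
    return feature_bits + policy_bits

# A running time cubic in |S||A| is therefore cubic in the input size.
```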
Overall, we see that we need $O(|S||A|)$ bits to completely specify a problem instance. And therefore, our algorithm has cubic time complexity in input size (i.e. $|S||A|$) and so it is an efficient algorithm. We hope this clarifies your concern about time complexity. We will include this explanation in the text if it helps to clarify things further. Please let us know if you have any further questions or feedback and we look forward to your final evaluation. Thank you for your time and consideration! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning | Accept (poster) | Summary: The authors define an environment-parameter sensitivity metric in terms of a Jensen gap of min-max-scaled performance. The gap gives the difference between the best average performance and the average best performance over a particular algorithm's hyperparameters across a set of environments. The authors then provide a 2D visualization plotting this sensitivity gap against the average best performance. This gives an intuitive way to assess whether a change in parameter sensitivity can be outweighed by its (positive) change in performance.
The authors run experiments for PPO on Brax to illustrate their method.
At the end, the authors also use their performance metric to tune subsets of the total parameter set.
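A minimal sketch of the described gap, assuming a `(num_envs, num_hyperparams)` array of min-max-scaled scores (function name ours, not the paper's code):

```python
import numpy as np

def sensitivity_gap(scores):
    """Jensen gap for a (num_envs, num_hyperparams) array of normalized
    scores: mean of per-environment best minus best of the
    cross-environment mean. Zero means one fixed setting is best
    everywhere; larger values mean more hyperparameter sensitivity."""
    per_env_best = scores.max(axis=1).mean()
    best_fixed_setting = scores.mean(axis=0).max()
    return per_env_best - best_fixed_setting
```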
Strengths: The definition of the environment-dependent parameter sensitivity is intuitive through its use of the Jensen gap. I agree that this is a sound method to evaluate the robustness of parameter tuning, although care must be taken when choosing the environment sets.
Figure 3 and the discussion in Section 3.2 are excellently presented and give an intuitive way to think about what patterns we would like to see when performing parameter ablations.
The experiments are repeated over many runs and use appropriate statistical methods; when discussing the results, the authors consider overlapping intervals and are reasonably careful about drawing conclusions.
Weaknesses: ### Summary
Although this paper shows a nice way to visualize the robustness of parameter optimization methods across sets of environments, the severe computational requirements needed to create these visualizations, on top of the shallow experimental setup the authors performed, make it unlikely that the current results will prove useful to other practitioners (aside from the isolated case of applying PPO to Brax).
The chosen performance metric (AUC) is incomplete when considering how RL is often applied (when only final performance matters), and it is arguably estimated using a non-robust bound estimator. Furthermore, the authors test on only 5 Brax MuJoCo environments while comparing extensions that originate from Atari, DMLab, Minecraft, or elsewhere. The authors do not take the characteristics of the environments they test on into account when discussing results. As a consequence, the conclusions we can draw from the presented results are quite limited.
Therefore I vote for reject.
### Major comments
- The single choice of the AUC metric is poorly motivated. Although it is a relevant metric, in RL we are often interested only in **final performance**. The AUC gives a proxy for cumulative regret; I also want to see a proxy for simple regret, i.e., the best run within an algorithm's training cycle.
- The authors argue that we need robust estimators of performance for hyperparameter sensitivity, so I am worried about the choice of min-max normalization of the algorithms' performance (see Eq. 1). These can be extremely high-variance estimators of the performance bounds and therefore are not really robust (perhaps explaining why the authors needed 200 seeds per ablation). I understand that this always nicely scales results into $[0, 1]$; however, if we rerun the experiments (different seeds and parameter sets), would Figures 1 and 2 show similar patterns? Would this change our conclusion about which choice of hyperparameter is better in terms of environment robustness? Why not choose an interquartile range, or even a $[p, 1-p]$ percentile range, which has guaranteed lower asymptotic variance?
- The authors don't discuss the sensitivity/robustness to the choice of environment sets for Eq. 2, i.e., what characteristics the environments we test robustness on actually have. For this reason, I also find the discussion in 4.3 a bit shallow considering the choice of Mujoco environments. The rewards in Brax are relatively well scaled and well behaved (at least compared to, e.g., a game like 2048, which has exponentially growing rewards), so the conclusion that the value-target symlog is not beneficial (on top of my comment about min-max scaling) should be nuanced and discussed appropriately.
- It's not clear from Eq. 5 (and the whole of section 5) how we decide what reference point to take when deciding which parameters to tune. I.e., when we choose to keep some parameters fixed, what should their values be? Are the defaults robust by definition, or not?
### Technical comments
- The definition of the normalized environment score $\Gamma$ is wrong: why is the action space $A$ used here? (see section 2). The symbol for environment sets is also inconsistent: why use $E$ instead of $\mathcal{E}$? In section 5, $\Omega$ is suddenly used again in place of $A$.
### Minor comments
- Eq. 2: why not write the sum as an expectation, $\mathbb{E}_{e \sim p(e)}[\max_h \Gamma(\dots)] - \max_h \mathbb{E}_{e \sim p(e)}[\Gamma(\dots)]$, i.e., the more recognizable Jensen gap?
- Briefly discuss the expected behaviour of Eq. 2 and what it means, i.e., a value of 1 means a large gap (not robust) and a value of 0 means no gap (very robust).
- Line 220: minibatch advantage normalization is not that important for PPO; even the blog post that the authors cite here says so.
- What ranges for the parameters were chosen in section 4.1? It's easy to break an algorithm through a bad or uninformed choice of learning rates... It's also not really clear from the text how the authors created the algorithm configurations to test, i.e., random search? An exhaustive grid? (I found this in Appendix D: the ranges for the values are OK when speaking from experience, and the authors used exhaustive grid search; this **must** be included in the main text.)
- Line 291, "It may ... be gained.": this sentence is broken; maybe rephrase as "By fine-tuning a few important parameters for each specific environment, you can unlock most of an algorithm's best performance."
Technical Quality: 2
Clarity: 3
Questions for Authors: - Could the authors change the bound calculation (e.g., to the IQR or something similar)? This should be an easy local change. Also, please add the best-performing-run metric (i.e., show the AUC and the best run per algorithm side by side).
- Could the authors improve the discussion and conclusions of the results, discussing the characteristics of the environments and how the choice of parameters hypothetically covaries with them?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Section 6 briefly discusses limitations; however, the actual limitations are discussed in Appendix D. The shallow experimental setup that was performed cost 7 GPU-years on modern NVIDIA GPUs. Since most code was implemented in JAX and the algorithm used was PPO, one of the most lightweight RL algorithms currently out there, the current framework does not offer much headroom for testing other methods.
The severe computational costs are not included in the broader impact statement. The authors do state that reducing the carbon footprint of DRL experiments is important; however, the presented methodology is not carbon friendly. If the authors had presented a better experimental setup, it would alleviate the need for other practitioners to run these types of sensitivity analyses for PPO themselves; however, this is currently not the case. Another way to improve the current setup would be to use variance-reduction methods to reduce the immense compute requirements.
Flag For Ethics Review: ['Ethics review needed: Environmental Impact']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and comments on improving our work.
## Major comments:
1. Several performance metrics are common in reinforcement learning: best policy performance, performance during the final k episodes, AUC, etc. We chose AUC because it informs us about the rate of learning. For example, in continual RL, the goal is not the final policy (there is none), and the main interest is learning, not the outcome of learning. Many applications of RL are similar; for example, in data-center cooling, learning happens in deployment. Another empiricist might care about final policy performance and measure sensitivity with respect to that. We can add a sensitivity analysis that uses final performance as the metric to the appendix for the final version.
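For concreteness, both proxies can be computed side by side from the same learning curves; this sketch uses invented curves and function names, not our actual evaluation code:

```python
import numpy as np

def auc(curve):
    # Normalized area under the learning curve: a proxy for
    # cumulative regret / rate of learning.
    return float(np.mean(curve))

def final_performance(curve, k=10):
    # Mean return over the final k evaluation points: a proxy for
    # simple regret / final policy quality.
    return float(np.mean(curve[-k:]))

# Two hypothetical learning curves with identical final performance:
fast = np.minimum(np.arange(100) / 20.0, 1.0)  # learns quickly
slow = np.minimum(np.arange(100) / 80.0, 1.0)  # learns slowly

assert final_performance(fast) == final_performance(slow) == 1.0
assert auc(fast) > auc(slow)  # only AUC separates the two by learning rate
```

The example shows why the two metrics can rank algorithms differently: curves that converge to the same final return can have very different AUCs.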
2. This is a valid question regarding the method's stability with respect to the environment set and score normalization. We used the statistical bootstrap specifically to represent the variance in the min-max estimation. Still, normalizing by a (p, 1-p) percentile range may be a better choice; we will add results with (p, 1-p) normalization. Thank you for the suggestion.
3. Different environment sets may produce different results. Min-max normalization, in particular, may be inappropriate in environments with exponential rewards scaling. In such cases, another choice of normalization may be preferable.
Regarding the benefit of the value-target symlog, our results and conclusions are indeed tied to the Mujoco environments we chose. We tried to avoid making general conclusions in the paper about the algorithms beyond the empirical setting we tested in; if a particular sentence is problematic in this regard, please let us know. However, this concern misses the important point that even within one common environment suite, sensitivity emerges and is a big deal. We chose PPO and Brax Mujoco to demonstrate that even with one of the most heavily studied algorithms in one of the most used environment suites, these definitions can provide insight.
As for "what characteristics do the environments have that we test robustness for": this is a massive question (a paper all on its own), and we would argue that RL researchers have made little progress on this important but separate question.
4. In section 5, when leaving hyperparameters fixed, we set them to the values that maximize cross-environment performance (i.e., the defaults). The intuition behind the definition of effective hyperparameter dimensionality (and thus all of section 5) is to measure how many hyperparameters are "robust" and can be left at their defaults. This points to another huge challenge in RL: how to establish sensible defaults.
## Technical comment
1. You are correct; this is wrong. This typo resulted from a change of notation for the algorithm set. This will be fixed for the camera-ready version.
## Minor Comments
1. Thank you for this suggestion. The notation change, and noting that the definition is a Jensen gap, would be a nice improvement. We will make this change for the camera-ready version.
2. We can add a sentence or two of extra interpretation of the ranges of the sensitivity gap in the discussion. Thank you for the suggestion.
3. The blog post states that minibatch advantage normalization does not have much effect on performance. To the best of our knowledge, its relevance for sensitivity has not been well-studied. However, some intuition suggests that it may be important for making the entropy coefficient less sensitive. This was a stated [motivation](https://arxiv.org/pdf/2301.04104#page=6) for the (albeit different) form of advantage normalization performed in DreamerV3.
4. Thank you for letting us know this was a point of confusion. We will add to the main text that a grid search was performed.
5. Thank you for your suggested edit!
## Questions
1. You are correct that using the IQR to normalize performance will give a lower-variance estimate. We agree it may be more appropriate, especially when practitioners do not wish to run 200-seed experiments. We will add results that use IQR normalization.
We have used the term "run" synonymously with "seed." Given that the goal of RL is to maximize expected return, the best seed would not be an informative alternative to mean performance across seeds, so we assume you meant something different by "run." Perhaps you meant final policy performance, as mentioned before? If so, we would happily include results with final performance in the appendix.
2. Yes, we can improve the discussion and conclusions. We will note in the discussion how the Mujoco environments have similar observations, reward scales, etc., and our results do not generalize to other classes of environments. Moreover, we will emphasize that PPO and Mujoco were chosen because they are heavily studied, and more importantly, they allowed us to demonstrate the metrics can still provide insight in these domains.
## Limitations
The presented methodology is:
1. Collect data about the performance of different algorithms with respect to their hyperparameters across different environments.
2. Normalize performance (via min-max, CDF, or some other scheme).
3. Use the already collected and normalized data to compute sensitivity and effective hyperparameter dimensionality as we defined them.
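A minimal end-to-end sketch of these three steps (the array shapes, random placeholder data, and min-max choice are illustrative only, not our actual pipeline):

```python
import numpy as np

# Step 1 (assumed layout): perf[e, h, s] = AUC for environment e,
# hyperparameter setting h, seed s. Random placeholder data here.
rng = np.random.default_rng(0)
means = rng.uniform(0.0, 10.0, size=(4, 16, 1))  # env- and hyper-dependent
perf = rng.normal(loc=means, scale=1.0, size=(4, 16, 200))

# Step 2: normalize per environment (min-max here; percentile or CDF
# normalization would slot in the same way).
lo = perf.min(axis=(1, 2), keepdims=True)
hi = perf.max(axis=(1, 2), keepdims=True)
norm = (perf - lo) / (hi - lo)

# Step 3: sensitivity as the Jensen gap over environments,
# E_e[max_h score] - max_h E_e[score], on seed-averaged scores.
score = norm.mean(axis=2)                               # shape [env, hyper]
sensitivity = score.max(axis=1).mean() - score.mean(axis=0).max()

assert 0.0 <= sensitivity <= 1.0  # per-env tuning never hurts on average
print(f"sensitivity gap: {sensitivity:.3f}")
```

The gap is non-negative by construction (the mean of per-environment maxima can never be below the maximum of per-environment means), matching the interpretation discussed in the reviews.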
We are agnostic to how practitioners collect their data. Grid search will not scale to larger hyperparameter spaces, so something like random search may be required, and it fits the proposed methodology. At the same time, please note that RL is incredibly noisy; there are many cases where studies with few seeds have greatly misled the community: https://arxiv.org/pdf/2304.01315
Statistical estimates require many samples; as you correctly noted, we use the min and max for normalization, which are high variance. It made sense to use our available computation to run more seeds instead of additional environments, algorithms, etc.
---
Rebuttal Comment 1.1:
Title: A promising paper that needs more polish
Comment: I thank and commend the authors for the strong points laid out in the global response, the revision of their figures, and their direct comments. I find figure 2 in the additional pdf (global response) extremely interesting and am surprised by the extreme contrast in results when comparing figure 4 (main paper) with figure 1 (pdf rebuttal).
---
I agree with the point that Mujoco/Brax was only a test set to showcase your visualization method, and I did not find any overly strong claims or conclusions in these sections in particular. However, my main problem is that, despite the cost of the experiments and the proposed visualization, I still cannot draw any insights from these results, which is unfortunate.
The issue I had is that this section focuses too strongly on the raw results: for example, section 4.2 is a lengthy discussion of most parameters (the details of which, I believe, were a bit beside the point), and section 4.3 simply enumerates everything we can see in figure 4.
I want to see a critical discussion of how we can use this visualization method to gather insight and what we can do with that information, accompanied by a well-executed test case. This should include multiple ways of looking at the same data (different normalizations, environment subsets, conditional parameters, or more). As I mentioned before, I think figure 2 of the rebuttal PDF adds tremendously to this discussion.
---
Considering all my previous points, I still think this paper lacks polish. It looks promising, but it is not quite there yet. I will raise my score from 3 to 4 and contribution from 1 to 3, since the authors did address many concerns.
---
Reply to Comment 1.1.1:
Comment: Here are two insights that can be drawn from Figure 2 of the pdf rebuttal.
While it has been known in the literature that observation normalization matters a lot for performance, we can now see that the performance gain comes with, or is even partially enabled by, increased sensitivity.
It has also been previously reported that advantage normalization does not matter for performance. This appears to be true. However, it somewhat lowers sensitivity while retaining similar performance, making it possibly more interesting for practitioners than the literature has presented it.
Thank you for pointing out that section 4.3 could be deeper. For the camera-ready, we will drastically shorten section 4.2 (or move it to the appendix) and expand section 4.3 to include additional discussion and figures (different normalization schemes, environment subsets, etc.) as discussed in the rebuttal and responses. | Summary: The paper proposes a method to analyse how sensitive RL methods are with respect to hyperparameter tuning. The authors argue that one method may perform well on average but require more HPO tuning per task, which hides some computation and prevents results from being comparable. They introduce a sensitivity metric that measures the difference between the best hyperparameter tuned per task and the performance of the best hyperparameter on average. They then propose a quadrant analysis in which both the sensitivity and the tuned performance are displayed, which allows algorithms to be compared along these two dimensions. Experiments are performed on PPO normalization variants, showing that current normalizations involve different trade-offs: some approaches improve scores but increase hyperparameter sensitivity, while others conversely lower both scores and sensitivity, but no approach currently improves both. Finally, the authors study how many hyperparameters require tuning, with the others fixed, while keeping 95% of the best performance.
Strengths: * The paper is very well motivated. It tackles an important problem for RL (and, more generally, for how to account for HPO sensitivity when reporting results).
* The paper is very well written and easy to follow; the methods are well described, and the experiments are very sound.
* The paper would provide a valuable contribution if the results of the runs are released (will they be? see my question).
Weaknesses: The paper is currently missing some analysis regarding the method's stability (see questions). For instance, how stable and reliable would the proposed method be with respect to different environments (does adding one environment change the results completely?) or normalizations (e.g., using a CDF instead of min-max)?
Technical Quality: 2
Clarity: 3
Questions for Authors: Here are points that would be important for me to raise my score:
* In figure 4, are the results stable if you leave out one environment? If you compute the plot 5 times, each time leaving out one of the environments, do you get similar plots? If not, then the analyses in the paper (that such and such method is more stable with respect to hyperparameters) will be less warranted.
* Will the paper release the dataset of evaluations in addition to the code? (The code would have limited interest compared to the data.) I would highly encourage the authors to share the dataset, possibly with a script to reproduce some of the paper figures (for instance, as was done by https://github.com/google-research/rliable). The dataset would also be very useful for simulating and comparing HPO methods, in particular if it contains the evaluation per iteration (or per 100s of iterations, for instance).
* Figure 4: there are very large outliers for the obs zero-mean normalization sensitivity, and the distribution is very skewed; could you explain why they happen?
* One popular method for studying hyperparameter importance is fANOVA; could you include an analysis using it? It would be useful for practitioners to know which HPs matter most.
* In the limitations section, you mention that the results may change under a different normalization (say, CDF instead of min-max). This is indeed an important point; could you report the results of Fig. 4 with CDF normalization as well? I assume it should be a minor change, as it is just a change of metric. However, it seems important to assess how much the method would be impacted by a different normalization (even if the results change, the paper would still be valuable).
Here are points that are more details:
- You have p(w, e, h, \kappa) at l110 and then p(a, e, h); it would be nice to unify the notation.
- l151: I think it would be useful to state explicitly that the quantity is always >= 0.
- "The shaded region is a 95% Student t-distribution confidence interval around the mean return over 200 runs"
=> why is the CI not covering the mean in Walker2d?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, thank you for your review and comments on improving our work. We will address your specific questions here and focus on general concerns shared between the reviews in our general rebuttal.
## Weaknesses
As requested, we have included 5 repeats of the Figure 4 plots in the PDF attached to the general rebuttal, leaving out one environment each time. In addition, another reviewer asked us to demonstrate results with (p, 1-p) percentile normalization instead of min-max; we will perform this and include the results.
## Questions
1. We have included a figure in the PDF attached to the general rebuttal that does this. Leaving out environments does shift the reference point, and the positions of the variants shift somewhat relative to it. However, the regional position of the variants is mostly consistent (e.g., observation ZM normalization is still more performant and more sensitive than the reference point).
2. Yes, we will do this. The code for reproducing the plots was included in the supplementary material, but it is somewhat useless without the data. We will look into providing the data through Google Cloud or some other platform.
3. Digging deeper into this, we found a minor error in the plotting code. After re-running experiments, we found that it did not impact the main message of our paper or the vast majority of the particular empirical outcomes. The corrected Figure 4 will be included in the PDF for the general rebuttal.
4. The focus of our study has been characterizing algorithms’ performance across sets of environments with respect to their hyperparameters. The work we have seen that uses f-Anova has focused on characterizing the importance of hyperparameters within an environment, an important but slightly tangential goal. We could report the hyperparameters that were most impactful to tune per environment (measured when creating Figure 5) and report how the important hyperparameters and their values change when tuning pairs of hyperparameters jointly, etc. Do you believe this would be valuable?
5. Another reviewer suggested normalizing by a (p,1-p) percentile range instead of min-max, which has a lower variance. We will include this normalization and compare it with min-max in the final version.
As for CDF normalization, after further consideration, we are unsure whether it makes sense to visualize it in the style of Figure 5. Taking an expectation through a non-linear operator does not preserve orderings. We normalize the AUCs of runs (seeds) and then average across runs to obtain the expected normalized performance, so the interpretations of regions in Fig. 4 may have very different meanings under CDF normalization. An algorithm could have a worse average return but be placed higher on the CDF-normalized Figure 5 plot, e.g., when some seeds perform slightly better in units of return but are placed much higher in the CDF because of the nonlinearity. Nevertheless, it is a minor change, and we can include results with CDF normalization in the appendix if requested.
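The ordering concern can be made concrete with a toy example (the numbers are invented): an algorithm with a worse mean return can still score higher after empirical-CDF normalization.

```python
import numpy as np

a = np.array([11.0, 11.0, -100.0])  # two slightly better seeds, one disaster
b = np.array([10.0, 10.0, 10.0])    # consistently decent seeds

pooled = np.concatenate([a, b])
# Empirical CDF over the pooled returns of both algorithms.
ecdf = lambda x: np.mean(pooled[None, :] <= x[:, None], axis=1)

assert a.mean() < b.mean()              # worse expected return...
assert ecdf(a).mean() > ecdf(b).mean()  # ...yet higher CDF-normalized score
```

The single catastrophic seed drags down the mean return of `a` far more than it drags down its mean rank, which is exactly the scale-discarding behaviour of the CDF discussed above.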
## Detailed comments
1. Thank you for catching the notation error. It will be fixed for the camera-ready.
2. We will clarify this. Another reviewer suggested noting the connection to Jensen's inequality; this will be clarified for the camera-ready version.
3. Thank you for catching this. After re-running the code and examining it, the confidence interval fully covers the mean. We suspect that a post-processing formatting issue introduced this error in the document. The corrected version is included in the general rebuttal PDF, and the figure will be corrected for the camera-ready version.
---
Rebuttal Comment 1.1:
Title: answer to rebuttal
Comment: Thank you for your answer.
1. Thanks for adding this figure, I think it is important to verify this stability to make sure the statement of the papers are applicable in generalized settings.
2. Great to hear, as mentioned in my review, this data can be valuable for other research.
4. The analysis you suggest makes sense and would provide value (I gave fANOVA as an example, but any analysis of HP importance would be valuable). I agree that studying hyperparameter importance is tangential to the main point of the paper, but it still seems related (it may be that some algorithms have the same hyperparameters to tune while others require a larger set, which is also interesting for practitioners, in addition to knowing which hyperparameters had the most effect).
5. I get your point about the CDF; I don't mind if you use a (p, 1-p) percentile transformation instead, as long as you make sure that the results are not completely tied to one normalization. Regarding the downsides of the CDF you mention, I think those are standard points not necessarily tied to your use case: the CDF discards the scale (only the ordering matters), which has benefits (robustness to outliers, a uniform distribution) and downsides (sometimes the scale is important), which is why this normalization is good to have in addition to min-max, as it offers different trade-offs.
I have raised my score given that some of my points were addressed.
---
Reply to Comment 1.1.1:
Comment: We appreciate your comments and thank you again for your thoughtful review. | Summary: This paper proposes a new evaluation regime for reinforcement learning. As opposed to only taking into account benchmark performance (i.e., final return), as is common in previous literature, this work suggests considering an extra dimension: how sensitive algorithms are to hyperparameter tuning, based on a heuristic developed in the paper. The authors analyse PPO, and some of its variants, using this new evaluation regime, and find that performance increases often correspond to increased sensitivity to hyperparameters. Finally, the paper considers a top-k approach to hyperparameter tuning, adjusting only the most impactful hyperparameters, to see how the baseline algorithms compare under a more limited hyperparameter-tuning regime.
Strengths: - The paper clearly takes care to use an extensive number of experiments, possibly reaching into the realm of unnecessary, to provide some rigour to their results.
- The domain being considered - the hyperparameter sensitivity of algorithms in addition to their performance - is an underexplored area and one which becomes increasingly important as the cost of experiments increases.
- Approaching this problem in a visual setting seems reasonable.
- I like the approach of distilling hyperparameter optimisation to optimising only a smaller set of more important values; this has real benefits in enabling tuning of the majority of hyperparameters in *cheap* environments and tuning the key values only in more expensive environments.
Weaknesses: - Noting appendix A (the table of how many hyperparameters each algorithm has); the definition of 'hyperparameters' seems pretty weak, and a lot of those included seem to just be design decisions of the actual algorithms.
- I find reading the plots quite confusing, exacerbated by all of the different colours marking each of the areas. I think this obfuscates the message of the plots and as a reader makes it hard to come to conclusions.
- Given this work is based on PureJaxRL, which I think runs on Brax for 1e8 frames, these experiments seem very short (3e6 frames) and possibly don't give the algorithms the full opportunity to converge.
- There is very limited discussion of preexisting literature in this space. While I appreciate this takes a subtly different tack (calling for us to measure how sensitive RL algorithms are to hyperparameters, rather than designing algorithms with few hyperparameters), I would expect to see significantly more discussion of the AutoRL literature. Framing this work better in related work would definitely strengthen the paper.
- The formatting looks quite off with the figures. I think this is because a lot of the captions are to the side of the figures, rather than below.
- Considering the actual hyperparameters being tuned, the results feel slightly disingenuous. If we consider the practitioner running the hyperparameter tuning, we would expect them to select more reasonable values or to focus the search in a significantly more targeted way. For instance, it is no surprise that the highly performant methods saw large performance decreases (i.e., hyperparameter sensitivity) when evaluated with entropy coefficients in the range [0.001, 10] or learning rates spanning 5 orders of magnitude. Instead, it would be much more sensible to explore how the performance changes with reasonable hyperparameters that one is likely to practically tune over (i.e., we generally have a good idea of where to start with values, so we are unlikely to try an entropy coefficient of 10).
- It would be good to see some exploration of algorithms which have done something different than just considering the changes to normalisation in PPO returns or observations. It feels like some broad statements are made despite the fact that this analysis was only taken in a single setting.
Technical Quality: 2
Clarity: 2
Questions for Authors: - In the plots, the authors state that they plot 95% confidence intervals over 200 seeds, but on the y-axis all of the 'confidence intervals' extend only in the negative direction, and I am not clear why. Is this a mistake, or am I misinterpreting the results? At the same time, are the circles representing the mean score over runs, environments, or both; and therefore, are the confidence intervals defined over the runs, the environments, or a big stack of (in this case) 1000 results per point? Is the comparison for hyperparameter sensitivity done per seed or on the averages over seeds? There are a lot of questions about exactly how these results are being presented that don't come through in the paper.
- In some cases, increased hyperparameter sensitivity can be a positive: it can give us extra opportunities to boost performance. Do you think this should be mentioned as a limitation of this method, which effectively calls for less sensitivity?
- Why did you use 200 seeds per hyperparameter configuration, which is significantly more than would be used in most other research? In practice, would it not have been more sensible to use a much smaller number of seeds (e.g., 8) and focus on a much tighter range of hyperparameters?
- As a brief aside, the authors have not removed the instructions of the NeurIPS checklist as requested; this should be done before the camera-ready copy is released.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors briefly discuss some limitations of their methods; in particular, that their proposed metrics are highly environment specific (even in this case, they are looking at only one suite of environments which may end up biasing results), and will need reevaluation every time you shift to a new set of domains. However, I think there are some other key limitations that are not discussed, many of which have been raised above. In particular:
- In addition to being environment specific, the metrics proposed are going to be very specific to the range of values which are tested. I have mentioned above how the values chosen for hyperparameter tuning here are not particularly reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We appreciate your feedback.
## Weaknesses
1. We recognize that the delineation between hyperparameters and algorithm design can be fuzzy. We cite the sources used for our counts and, when possible, try to stick to hyperparameter lists identified by the original papers (e.g., the DQN Nature paper has a table of 16 hyperparameters, so the value in our table is 16). When this was not possible, as a guiding principle, we tried to identify parameters of the algorithm that could easily be modified and may contribute to performance gains if tuned per environment. We did not count parameters related to network architecture (e.g. number of neurons in hidden layers).
2. The two-dimensional plot is motivated by the fact that nuance is lost when algorithms are evaluated using one metric. Studying the relationship between sensitivity and performance can aid insight. Perhaps there is a better way to discriminate between regions other than colors. We would be happy to try out any suggestions the reviewer has.
3. The Mujoco experiments that we have observed in the literature run for between 3e6 and 5e6 steps (e.g., OpenAI has published a 3e6-step benchmark for the 5 environments that we tested: https://spinningup.openai.com/en/latest/spinningup/bench.html). Note that we evaluate based on AUC and not final performance; for final performance, one might choose to run longer.
4. You are correct that we are addressing a different problem. Often, algorithms in the AutoRL literature aim to tune hyperparameters via some HPO process (which sometimes contain additional hyperparameters of their own). Future studies could use the definitions provided here to try to understand if the algorithms proposed in the AutoRL literature reduce sensitivity over the base algorithms that are being modified. We are proposing an evaluation methodology, not a solution to hyperparameter tuning.
5. We will improve formatting.
6. It is not uncommon to sweep step sizes over multiple orders of magnitude (see https://arxiv.org/pdf/2407.18840, which considers actor step sizes as low as 10^{-6} and as high as 10^{-2}). While it may seem a bit unusual to try an entropy coefficient of 10, consider that the proper order of magnitude of the entropy coefficient depends heavily on the scale of the advantage term in the actor loss, which is affected by the type of advantage normalization performed, or by the order of magnitude of the reward function if the advantage is left unnormalized. Existing literature from AutoRL (Table 4: https://arxiv.org/pdf/2306.01324) has already searched spaces spanning 4 orders of magnitude to find entropy coefficients for PPO in Brax. Considering that we are ablating various forms of normalization across different sets of environments, it does not seem unreasonable to try 5 orders of magnitude. In our results, the entropy coefficient of 10 was not bad in every environment; in fact, there was a hyperparameter setting with an entropy coefficient of 10 for the observation-normalization PPO variant that was in the 97th percentile of the settings we tried for Walker2d. Nevertheless, please note that this paper's main contributions are the sensitivity and dimensionality metrics and the plots for visualizing them; the PPO study is used to demonstrate how these tools might be used.
7. It would be great to apply the sensitivity and effective dimensionality metrics to other algorithms, environment distributions, etc. Note that claims will always be limited to some small set of agents and environments (as you noted above); general claims are never possible, especially considering many RL benchmark environments have been developed specifically to test current algorithms. We were very careful not to make claims beyond the methods tested. If there are specific statements in the paper that you feel are overreaching, let's discuss them.
## Questions:
1. The confidence intervals were computed by creating bootstrap datasets, i.e., sampling seeds with replacement from the original dataset and computing the performance and sensitivity metrics on the bootstrap datasets. This creates distributions of algorithms' performance and sensitivity scores. The upper and lower endpoints of the confidence intervals are the 97.5th and 2.5th percentiles of the distributions. Please note that the collapse of the upper endpoint of the performance CI in Figure 5 resulted from a minor bug in the plotting code. We have re-run the bootstrap and included the corrected figure 5 in the attached PDF.
2. There are cases where a practitioner would be okay with increased sensitivity, especially if there are even larger increases in performance. The question highlights the benefit of studying the performance-sensitivity relationship of algorithms on a 2d plane, such as in Figure 4.
3. Because our proposed metrics are nonstandard, we wanted to avoid statistical tests that use distributional assumptions (e.g., the Student's t interval assumes normality). This is why we chose to use the percentile bootstrap. However, percentile bootstrap CIs are often quite wide; therefore, we chose to run many seeds to obtain useful CI estimates. There is a long line of work suggesting the current practice of fewer than 10 seeds is not enough:
https://arxiv.org/abs/2304.01315,
https://arxiv.org/abs/1904.06979,
https://openreview.net/pdf?id=Xe7n2ZqpBP,
https://rlj.cs.umass.edu/2024/papers/RLJ_RLC_2024_330.pdf
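The percentile-bootstrap procedure described in point 1 can be sketched as follows. This is a minimal illustration, not the authors' code; `per_seed_scores` stands in for any per-seed performance or sensitivity metric.

```python
import numpy as np

def percentile_bootstrap_ci(per_seed_scores, statistic=np.mean,
                            n_boot=10_000, alpha=0.05, rng=None):
    """Percentile-bootstrap CI: resample seeds with replacement, recompute
    the statistic on each bootstrap dataset, and take the alpha/2 and
    1 - alpha/2 percentiles of the resulting distribution."""
    rng = np.random.default_rng(rng)
    scores = np.asarray(per_seed_scores)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(scores, size=scores.size, replace=True)
        boots[b] = statistic(resample)
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

With `alpha=0.05` this returns the 2.5th and 97.5th percentiles, matching the endpoints described above, and makes no distributional assumption about the scores.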
## Limitations:
1. The computed sensitivity metric is intimately tied to the distribution of environments used, the process for choosing hyperparameter values, the number of seeds, etc. An underlying assumption is that a practitioner cares about evaluating algorithms with respect to some distribution of environments. If the practitioner is consistent in their data collection process, these metrics can be used. As argued above, respectfully, we feel the ranges used were appropriate given the algorithmic variations under study.
---
Rebuttal Comment 1.1:
Comment: Thank you for the extensive response to my review.
A few thoughts are below. Where satisfied, I have not included a response to each point for brevity's sake.
# 2D plot
I think the key for me about the colouring is that it enforces a fairly arbitrary segmentation that I think gets in the way of analysis. I think, personally, I would prefer to see results simply plotted on these axes, with analysis separated into text.
# Comparison with AutoRL
While I agree there is a difference, I think the takeaway is that both AutoRL and your work have similar motivations - to promote systems where an environment is put in and a policy comes out without human input. As such, I still think this is a worthwhile comparison. In a sense, you can consider that AutoRL algorithms are 'hyperparameter free' in that they deal with hyperparameters internally, and thus have no hyperparameter sensitivity.
---
In addition to the above, I still feel there is a missing component considering how human-in-the-loop and prior work would focus a lot of these search efforts by offering intuition about the kind of hyperparameter ranges which are useful. That said, I am so far satisfied that some of my concerns have been addressed sufficiently, and thus have increased my score from 3 to 4. I remain open to discussion about the above.
---
Reply to Comment 1.1.1:
Comment: 2D Plot
While color may not be the best choice for visualizing these segmentations, we don’t believe the segmentation is arbitrary. Each of the segmentations has a different interpretation of its relation to the reference point. The slope-one line passing through the reference has unique importance, as it marks the points where observed performance gains are directly attributable to per-environment hyperparameter tuning.
Comparison with AutoRL
We agree that, like AutoRL, we are interested in promoting methods that require less human intervention and tuning to apply. The key difference we see between our work and AutoRL is that we are demonstrating the utility of our methodology with an experiment on PPO, not proposing a new way to tune hyperparameters in RL. While AutoRL methods tune an RL algorithm's hyperparameters internally, it should be noted that the AutoRL algorithms themselves often have hyperparameters (e.g., the scaling factor and crossover factor parameters in DEHB). For the camera-ready, we will include an additional discussion of the AutoRL literature and how the AutoRL community could use our method to measure their effectiveness at improving performance while reducing (hyper-)hyperparameter sensitivity.
Thank you for your additional comments. We appreciate your feedback and discussion. | Summary: The paper introduces an empirical framework for assessing the hyperparameter sensitivity of reinforcement learning algorithms.
The framework consists of two metrics: 1. hyperparameter sensitivity, which gives the normalized difference in performance between the per-task best hyperparameters and the across-task best hyperparameters. 2. effective hyperparameter dimensionality, which gives the number of hyperparameters that can be left the same as the across-task hyperparameters while tuning the rest and still obtaining a threshold (say 95%) of the per-task best hyperparameter performance.
Using these metrics in addition to performance metrics, practitioners and researchers can have a better idea of the benefits/downsides of a modification to an algorithm (how much better vs. how much more sensitive).
The framework is used to compare several normalization variants of PPO introduced in the past years on the continuous control environments, giving a better picture of their contribution.
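The sensitivity metric described above can be sketched as follows. This is one plausible reading of the verbal definition, not the paper's exact formula; the normalization and aggregation choices may differ.

```python
import numpy as np

def hyperparameter_sensitivity(perf):
    """perf: (n_envs, n_settings) array of scores, normalized per environment
    so they are comparable across environments.  The metric is the normalized
    gap between tuning hyperparameters per environment and committing to the
    single best setting across all environments."""
    per_env_best = perf.max(axis=1).mean()     # tune separately per environment
    across_env_best = perf.mean(axis=0).max()  # one fixed setting for all
    return (per_env_best - across_env_best) / per_env_best
```

A sensitivity of 0 means one fixed hyperparameter setting recovers the per-environment tuned performance; larger values mean per-environment tuning is doing more of the work.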
Strengths: The problem is well-motivated. Hyperparameter sensitivity is a well-known issue in deep RL with novel algorithms often providing better performance at the expense of a higher sensitivity. Such tradeoff must be made explicit.
The framework introduced in the paper provides an effective way to draw that tradeoff with a clear interpretation of it using Figure 3.
The metrics are simple and quite natural and provide a solid starting point for a hyperparameter sensitivity framework.
Although computationally expensive, and although it remains to be shown whether its results transfer across domains, the framework is likely to have a high impact on the field.
At least on the continuous control with Brax and PPO variants where the framework has been used as an example.
In particular thanks to the thorough and accurate experimental protocol, with 200 seeds per experiment and confidence intervals.
Weaknesses: Metric definition:
- The effective hyperparameter dimensionality depends on the total number of hyperparameters of an algorithm and is likely to scale with it, so this makes it incomparable between algorithms with a different number of hyperparameters. Perhaps counting the number of parameters that changed instead would make it more comparable across algorithms, as that is what would ultimately dictate practitioners' tuning budget.
Hyperparameters:
- I would consider the minibatch size and the number of epochs in PPO to be critical hyperparameters as well. It's not clear how the choice of hyperparameters to sweep over was made.
- (minor) The epsilon in the denominator of the minibatch normalization may also play a big role.
Inconsistent/confusion notation:
In line 91, the tuple (w,h) defines an agent a, but this is used in a confusing and inconsistent way throughout the paper.
- Line 116, $\hat p(a, e, h)$: if $a$ is there, then $h$ is redundant. It seems like $a$ there stands for $w$.
- Equation 1: same. The $a$ should be a $w$.
- Equations 2 and 5: $\Gamma(a, e, w)$: now it is even more confusing, as there is both an $a$ and a $w$, and an $h$ is missing.
Claims:
Line 321 "vastly difference effective hyperparameter dimensionalities": the largest gap observed in the paper is from 2 to 4, so I would not extrapolate here, though arguably that is indeed an additional two dimensions to sweep over, so the cost of the sweep scales exponentially.
Technical Quality: 3
Clarity: 3
Questions for Authors: Reporting the mean under the curve (AUC) with 95% bootstrapped confidence intervals over 200 runs is great, but I would expect some discussion on the use of a confidence interval, as it collapses the shaded area across the 200 runs to the statistic being computed (here the mean AUC).
To me, using a dispersion measure like the variance of the mean AUC would also be valid, as some hyperparameters would have more variance than others (although this could also be seen as an additional dimension of sensitivity).
I would appreciate it if the authors could comment on this.
Also, at what point are the confidence intervals computed? When computing the expected performance or when computing the hyperparameter sensitivity, etc?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately state the limitations of their work and its broader impacts. In particular the potential limitation of the framework to a specific environment distribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review. Your comments are appreciated for improving this work.
## Weaknesses
Metric definition:
The effective hyperparameter dimensionality already does this by counting the number of hyperparameters necessary to obtain a large fraction of peak performance. If algorithm x has two very sensitive hyperparameters that need to be tuned per environment, and algorithm y has 40 hyperparameters but only one of them is sensitive, with the others okay being left at their defaults, then algorithm x will have an effective hyperparameter dimensionality of 2 while algorithm y will have an effective hyperparameter dimensionality of 1.
Of course, as the number of hyperparameters grows, its number of sensitive hyperparameters may also grow, but this does not make it incomparable to other algorithms.
## Hyperparameters
Thank you for the suggestions! Based on our review of the literature and previous experience, we chose hyperparameters that appeared to be important. The hyperparameter choices you suggested would be very interesting to investigate. If you believe it would add to the paper, we would be willing to run additional experiments with batch size, epochs, etc., before the camera-ready deadline.
## Inconsistent notation
This resulted from a notation change for the algorithm set. Thank you for finding the inconsistency; it will be fixed for the camera-ready.
## Questions
The confidence interval is computed by creating bootstrap datasets, i.e., sampling seeds with replacement from the original dataset, and then computing the performance and sensitivity metrics on the bootstrap datasets. This creates distributions of the algorithms' performance and sensitivity scores. The upper and lower endpoints of the confidence intervals are the 97.5th and 2.5th percentiles of the distributions. Please note that the collapse of the upper endpoint of the performance CI in Figure 5 resulted from a minor bug in the plotting code. After re-running experiments, we found that it did not impact the main message of our paper or the vast majority of the particular empirical outcomes. We have included the corrected Figure 5 in the PDF attached to the general rebuttal.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications.
Regarding the bootstrapped confidence intervals, what I mean is that the more samples available and the more points drawn at a time the smaller the interval will be.
This is different from a dispersion measure, like the variance, which would not shrink with more data.
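This distinction can be illustrated with a quick numerical sketch (illustrative code written for this exchange, not from the paper): with more seeds, the width of a percentile-bootstrap CI for the mean shrinks roughly as $1/\sqrt{n}$, while the sample standard deviation converges to the population value rather than shrinking.

```python
import numpy as np

rng = np.random.default_rng(0)

def ci_width_and_std(n_seeds, n_boot=2000):
    """Width of a 95% percentile-bootstrap CI for the mean, and the sample
    standard deviation, computed from n_seeds draws of a unit-variance score."""
    data = rng.normal(0.0, 1.0, size=n_seeds)
    boot_means = [rng.choice(data, size=n_seeds, replace=True).mean()
                  for _ in range(n_boot)]
    lo, hi = np.quantile(boot_means, [0.025, 0.975])
    return hi - lo, data.std(ddof=1)
```

Comparing `ci_width_and_std(50)` with `ci_width_and_std(5000)` shows the CI width collapsing by roughly a factor of ten while the standard deviation stays near 1.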
I maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for clarifying. We are sorry that we missed this point in our initial response. We used a confidence interval as we are interested in understanding the exact sensitivity and performance under different algorithmic variations and so plot the confidence in our sample estimates. We will include a table in the appendix that reports the standard deviation across seeds of the AUC for each hyperparameter setting tested in each environment. | Rebuttal 1:
Rebuttal: Thank you to the reviewers for your questions, comments, and suggestions on improving our work! We appreciate your feedback. We have addressed specific comments in the individual rebuttals and will provide a general response to comments shared among the reviewers here.
The main contributions of our paper are the hyperparameter sensitivity and effective hyperparameter dimensionality metrics and the plots for visualizing them. There seems to be confusion that, because our study performed many runs with a grid search, this is what we are advocating for. Our methodology does not require one to do a grid search, 200 seeds, AUC performance metrics, min-max score normalization, or a percentile bootstrap. The hyperparameter sensitivity and effective hyperparameter dimensionality metrics are agnostic to these choices. If a practitioner made a different empirical design choice than ours on one of these decision points, then they may very well see different results; what is most important is that they are consistent and fair when comparing algorithms in their study. For example, another empiricist could do a random search with 10 seeds per hyperparameter setting, consider final policy performance, use a different score normalization technique, report standard errors, and still use the definitions and plots that we provide.
Several reviewers correctly noted that the sensitivity metric is intimately tied to the environment set and that our findings are thus limited to the Brax Mujoco environments tested. We acknowledge this. No matter how large an environment set is chosen, this will always be true; regardless of whether we include Atari, Minecraft, DMLab, or any other suite of environments, empirical results will not allow us to make claims about environments outside the evaluated environment distribution. What is being missed here is an important finding: even when there is only one common environment suite, sensitivity emerges and is a big deal. Limiting ourselves to Mujoco was a feature, not a bug, of our study, notwithstanding the fact that these Mujoco environments are widely used to rank and evaluate new algorithms.
Several questions were raised regarding various empirical design choices. We addressed each question in the per-review rebuttals; however, we will also summarize our justifications for them here.
The 200 seed bootstrap CI was performed for the following reasons: min-max estimation is high variance, the sensitivity metrics we propose are nonstandard, we did not wish to rely on confidence intervals that make distributional assumptions, there are many forms of stochasticity in RL, and existing literature has demonstrated that studies with small numbers of seeds can be misleading. We dug deeper based on the interesting questions from the reviewers. It revealed a minor error in the bootstrap CI code that did not impact the main messages of the paper or the vast majority of the particular empirical outcomes. The good news is that after running our experiments again with the bug fixed, two things that did change helped answer two of the questions from the reviewers regarding the size of the confidence intervals. Please see the attached PDF for the corrected Figure 4.
As for the grid search, we believe the values chosen are appropriate based on existing literature: https://arxiv.org/pdf/2306.01324, https://arxiv.org/pdf/2407.18840, and the argument presented in our rebuttal to reviewer YuTN.
The performance metric AUC was chosen as it captures the agent’s rate of learning. Additional studies using other performance metrics may be valuable. We will add results with the final policy performance used as a metric to the appendix as requested by reviewer ydma.
The choice of min-max normalization is indeed high variance. An alternative (suggested by ydma) would have been to normalize by a (p,1-p) percentile range (such as IQR). We will include results with this type of normalization.
Finally, reviewers noted some notation typos, sentence errors, and suggestions for added discussion. Thank you very much for pointing them out! We will ensure that all of these edits are made for the camera-ready submission. We appreciate your questions, comments, and criticisms for improving this research.
Pdf: /pdf/f9d8370502fb5c31075aa44c93112fbf9b6a99a7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Private Edge Density Estimation for Random Graphs: Optimal, Efficient and Robust | Accept (spotlight) | Summary: This paper studies edge density estimation for Erdos-Renyi random graphs under node-level differential privacy. They then show that their approach can actually be used to estimate the edge density of inhomogeneous graphs. In particular, this paper proposes an efficient algorithm with optimal privacy cost $O(1/(\varepsilon n \sqrt{np}))$, which is negligible compared to the non-private error for any epsilon in a moderate regime. To achieve this, the main technical approach in this paper is to design a new sum-of-squares algorithm for robust edge density estimation; then, based on the reduction from privacy to robustness in Hopkins et al., this paper uses an exponential mechanism whose score function is based on the corresponding sum-of-squares program.
To design a robust algorithm for edge density estimation of ER random graphs, this paper establishes several polynomial constraints that an ER random graph will satisfy with high probability. The proof that, if a graph meets these constraints, its average degree will remain close to $d$ is straightforward enough for the sum-of-squares proof system; this extends the utility guarantee of the polynomial program to its semidefinite programming relaxation, resulting in a polynomial-time robust algorithm.
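As a sanity check on the non-private rate quoted above, the empirical edge density of a sampled $G(n,p)$ graph deviates from $p$ by a relative error on the order of $1/(n\sqrt{p})$. A minimal numerical sketch (illustrative only, not from the paper):

```python
import numpy as np

def empirical_edge_density(n, p, rng):
    """Sample G(n, p) and return the fraction of present edges."""
    coins = rng.random((n, n)) < p       # i.i.d. Bernoulli(p) entries
    iu = np.triu_indices(n, k=1)         # each unordered pair counted once
    return coins[iu].mean()
```

For, say, $n = 400$ and $p = 0.1$, the relative error $|\hat{p}/p - 1|$ is typically around one percent, consistent with $1/(n\sqrt{p}) \approx 0.008$.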
Strengths: 1. Graph parameter approximation under node-level privacy is an important topic, and this paper gives an optimal algorithm with a polynomial-time implementation via sum-of-squares relaxation, under the most natural setting of random graphs.
2. Their technique of using the sum-of-squares method for the exponential mechanism is sophisticated.
3. All theorems are clearly stated and technically correct.
Weaknesses: As far as I can see, the most relevant work is Chen et al. "Private graphon estimation via sum-of-squares", but it appears to me that this paper lacks discussion and comparison in [Chen et al.]. Please see "Questions" for details. I would be happy to raise my score if all my questions are adressed.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you please elaborate on which part of your proof technique deviates from that in [Chen et al.]? Alternatively, are the proof techniques similar, while you are just addressing different problems in this paper?
2. There is also a theorem in [Chen et al.] for edge density estimation of stochastic block random graphs (Lemma 4.10), and it appears that their utility is better. Could you explain the fundamental obstacle between stochastic block random graphs and the more general inhomogeneous graphs that prevents the estimation for inhomogeneous graphs from achieving results as good as those for stochastic block random graphs? Would you mind adding more discussion about the results in [Chen et al.]?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: This paper discusses several limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1:**
> Could you please elaborate on which part of your proof technique deviates from that in [Chen et al.]? Alternatively, are the proof techniques similar, while you are just addressing different problems in this paper?
**Response 1:**
On the one hand, the algorithm in [CDd+24] uses an edge density estimation algorithm based on [SU19] as a preprocessing step and proceeds by a sum-of-squares exponential mechanism to privately estimate the underlying graphon.
Concretely for the task of edge density estimation, [CDd+24] does not improve over [SU19] and uses in fact the same algorithm as [SU19].
To improve the edge density estimation results in [SU19], we developed a new private edge density estimation algorithm using the sum-of-squares hierarchy, which is completely different from the techniques in [SU19] based on smooth sensitivity.
On the other hand, although [CDd+24] also relies on the sum-of-squares exponential mechanism for graphon estimation, our program constraints and identifiability proof are significantly different.
There are two particularly notable technical differences:
- Our strategy is closely related to the reduction from robustness to privacy framework developed in [HKMN23], while [CDd+24] relies on sum-of-squares Lipschitz extensions.
- The private graphon estimation algorithm in [CDd+24] relies on a single sum-of-squares program, while our algorithm has two stages: the rough estimation and the fine estimation, each with a different sum-of-squares program. The reason is that we found it difficult to achieve nearly optimal error rate using a single sum-of-squares program.
**Question 2:**
> There is also a theorem in [Chen et al.] for edge density estimation of stochastic block random graphs (Lemma 4.10), and it appears that their utility is better. Could you explain the fundamental obstacle between stochastic block random graphs and the more general inhomogeneous graphs that prevents the estimation for inhomogeneous graphs from achieving results as good as those for stochastic block random graphs? Would you mind adding more discussion about the results in [Chen et al.]?
**Response 2:**
We will add more discussion about the results in [CDd+24] in our proceedings version.
For the task of edge density estimation, [CDd+24] actually does not make use of any structure specific to stochastic block random graphs, but just treats them as inhomogeneous random graphs.
So [CDd+24, Lemma 4.10] is essentially a result on edge density estimation of inhomogeneous random graphs.
The algorithm behind [CDd+24, Lemma 4.10] uses an edge density estimation algorithm based on [SU19].
The error bound stated in [CDd+24, Lemma 4.10] is actually only the privacy cost. The total error bound of [CDd+24, Lemma 4.10] should also include the non-private error which is the same as the non-private error in our Theorem 1.6.
In terms of privacy cost, our Theorem 1.6 actually improves over the guarantees of Lemma 4.10 in [CDd+24].
More specifically, the privacy cost of [CDd+24, Lemma 4.10] is
$$
|{\hat{\rho}-\rho}|^2 \leq \tilde{O} \left(\frac{R^2\rho^2}{\epsilon^2 n^2}+\frac{1}{\epsilon^4 n^4}\right) .
$$
In comparison, our Theorem 1.6 gives a privacy cost of
$$
|{\hat{\rho}-\rho}|^2 \leq \tilde{O} \left(\frac{R^2\rho^2}{\epsilon^2 n^2}\right) .
$$
**Reference:**
- [CDd+24] Chen, Hongjie, et al. "Private graphon estimation via sum-of-squares." Proceedings of the 56th Annual ACM Symposium on Theory of Computing. 2024.
- [SU19] Ullman, Jonathan, and Adam Sealfon. "Efficiently estimating erdos-renyi graphs with node differential privacy." Advances in Neural Information Processing Systems 32 (2019).
- [HKMN23] Hopkins, Samuel B., et al. "Robustness implies privacy in statistical estimation." Proceedings of the 55th Annual ACM Symposium on Theory of Computing. 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. | Summary: The authors propose a robust and efficient (polynomial-time) algorithm to estimate the edge density of an Erdos-Renyi graph, which achieves optimal error up to logarithmic factors. The authors also design an optimal algorithm for inhomogeneous graphs where edges are drawn independently from different Bern(p_i).
Strengths: 1. The theoretical results are very strong. The authors prove matching (up to logarithmic factors) error bounds for both inhomogeneous and Erdos-Renyi graphs. The algorithms are polynomial-time.
2. The techniques are interesting. The authors use recent results for the connection between privacy and robustness. To design a time-efficient algorithm, the authors go beyond simple reduction from privacy to robustness and apply the sum-of-squares method.
3. The paper is well-written.
Weaknesses: There is no obvious weakness with regard to the results and technical parts of the paper. The only minor weaknesses are the problem formulation.
1. Both the Erdos-Renyi graph and the inhomogeneous model assume edges are independently chosen. In real applications, graphs are often fixed, and edges may not be independent.
2. The algorithm only works for edge density estimation. There should be many other interesting graph statistics, even on the Erdos-Renyi graph.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is it possible to design algorithms when the entries of the connection probability matrix are not independent?
2. Do any algorithmic ideas translate to other graph statistics like the number of triangles and k-stars?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations and negative social impact are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1:**
> Is it possible to design algorithms when the entries of the connection probability matrix are not independent?
**Response 1:**
Yes, our algorithm guarantees can extend to some settings where the edges in the graph are not independent:
- As our algorithm is robust under node corruption, the guarantees still hold even under minor dependence between the edges in the graph.
- Moreover, we expect that the guarantees of our algorithm easily extend to graphs with bounded maximum degree and bounded spectral norm, as we only exploit these properties of the graph for adding constraints to the sum-of-squares semidefinite programming.
These assumptions are easy to test given the graph instance, and are significantly weaker than the independence assumption.
**Question 2:**
> Do any algorithmic ideas translate to other graph statistics like the number of triangles and k-stars?
**Response 2:**
This is a great question.
- For Erdos-Renyi random graphs, as the only parameter of the distribution is the edge density $p^\circ$, we can obtain non-trivial (and potentially optimal) guarantees for estimating the number of triangles/k-stars by relating the expectation to $p^\circ$.
- For inhomogeneous random graphs, we are not so sure. We expect our techniques can lead to private algorithms for counting constant-size subgraphs such as triangles and k-stars with non-trivial guarantees, when the inhomogeneity (measured by the ratio between the maximum edge connection probability and the average edge density) is bounded.
The observation is that, when a small $\eta$-fraction of the nodes are corrupted, the number of triangles/k-stars in a large induced subgraph of the given corrupted graph with bounded degree and supported on $(1-\eta)n$ vertices is still close to the total number of triangles/k-stars in the original uncorrupted random graph.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for addressing my questions. I am keeping my score. | Summary: This paper gives the first polynomial time DP algorithm for estimating edge density of random graphs (Erdos-Renyi and inhomogeneous). The authors also give information-theoretic lower bounds to show that the error achieved is optimal (up to log factors). The paper utilizes the recent results of Hopkins et al who gave a black box reduction from privacy to robustness via a sum-of-squares exponential mechanism to design their new algorithm. Their main new contribution is a sum-of-squares algorithm for estimating the edge density which they then use together with the Hopkins et al framework to get their final algorithm.
Strengths: The main result of the paper is novel and significantly advances the area of DP algorithms for estimating graph parameters in random graphs.
Weaknesses: The techniques used heavily rely on the recent framework given by the Hopkins et al STOC paper. What is unclear to me is --- what were the challenges in designing the sum-of-squares algorithm for estimating the edge density? Since this algorithm is the main contribution of the paper in some sense, I would like to understand better if the design of the sum-of-squares algorithm itself was previously known and one only needed the Hopkins et al framework to obtain this result.
General comment about the paper --- you should define the quantity edge density formally somewhere since the whole paper hinges on the reader understanding what it is.
Technical Quality: 3
Clarity: 3
Questions for Authors: The results of this paper are very important and exciting. However, can you expand on the challenges in designing the sum-of-squares algorithm for estimating the edge density? I would like to be more convinced that this result is not simply piggybacking off on the Hopkins et al result. Thanks!
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There was no explicit discussion regarding the limitations of this work in the main body.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1:**
> You should define the quantity edge density formally somewhere since the whole paper hinges on the reader understanding what it is.
**Response 1:**
Thank you for pointing it out! We will add a formal definition in our proceedings version.
**Question 2:**
> Can you expand on the challenges in designing the sum-of-squares algorithm for estimating the edge density? I would like to be more convinced that this result is not simply piggybacking off on the Hopkins et al result.
**Response 2:**
This is a great question. We will discuss in more detail in the proceedings version.
- [HKMN23] and [AUZ23] showed a general connection between privacy and robustness. In general, this connection does not provide guarantees in terms of computational complexity. However there are several important examples where this connection can be used to transform efficient robust algorithms to efficient private algorithms.
- For the problem of robust edge density estimation under node corruption, there was no known sum-of-squares algorithm before our work, and we are only aware of an iterative algorithm [AJK+22]. For such algorithms not based on convex relaxation, it is completely unclear how to use the aforementioned connection between privacy and robustness to obtain an efficient private algorithm.
- We cannot just apply previous robust mean estimation algorithms based on sum-of-squares (e.g. Hopkins et al 2024). On the one hand, if we view edge density estimation as a one-dimensional Bernoulli mean estimation task, then previous algorithms are only optimal under edge corruption, but suboptimal for the node corruption model. On the other hand, if we view edge density estimation as an $n$-dimensional mean estimation task, then samples (i.e. rows of the adjacency matrix) are not independent.
- In general, when designing a sum-of-squares algorithm, the main challenges are identifying the right set of polynomial constraints and coming up with sum-of-squares proofs.
**Reference:**
- [AJK+22] Acharya, Jayadev, et al. "Robust estimation for random graphs." *Conference on Learning Theory*. PMLR, 2022.
- [HKMN23] Hopkins, Samuel B., et al. "Robustness implies privacy in statistical estimation." *Proceedings of the 55th Annual ACM Symposium on Theory of Computing*. 2023.
- [AUZ23] Asi, Hilal, Jonathan Ullman, and Lydia Zakynthinou. "From robustness to privacy and back." *International Conference on Machine Learning*. PMLR, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying, I have updated my score. | Summary: This paper presents a sum-of-squares-based polynomial-time differentially node-private algorithm for estimating the edge density of Erdos-Renyi random graphs that achieves optimal accuracy; the algorithm is simultaneously robust to corruptions. Specifically, if $p$ is the true Erdos-Renyi parameter, then the estimate $q$ released by the algorithm has error $|q/p - 1|$ bounded by $\frac{1}{n\sqrt{p}} + \frac{1}{\epsilon n\sqrt{np}} + \frac{\eta}{\sqrt{np}}$ (up to factors of $\log n$), where $\eta$ is the corruption rate for robustness. Up to $\log n$ factors, the first term is the information-theoretically optimal error rate with neither privacy nor robustness (it is also the error of the empirical edge density), and the third term is known to be required for robustness ([AJK+22]). The paper further shows a lower bound demonstrating that the remaining error term is necessary for privacy (a lower bound had been shown by [BCSZ18] for a closely related family of random graphs but not for standard Erdos-Renyi graphs, so the new lower bound closes this gap). Moreover, the paper extends this result to present an algorithm for the much more general problem of edge density estimation in inhomogeneous random graphs (e.g., SBMs or graphons) and proves lower bounds in this setting as well, showing that this algorithm also achieves the optimal error rate up to logarithmic factors.
Strengths: This paper achieves optimal accuracy in polynomial time for both private and robust edge density estimation of Erdos-Renyi graphs and for the more general setting of inhomogeneous random graphs, nicely closing the gaps from prior work. It achieves this by bringing a new approach to the problem: the sum-of-squares framework, along with leveraging a reduction from privacy to robustness. The paper is generally well-written.
For DP edge density estimation of Erdos-Renyi graphs (even without requiring robustness), the new algorithm matches the (already optimal) accuracy achieved by [BCSZ18] while running in polynomial time instead of exponential time; it also improves over the rate of the polynomial-time algorithm of [SU19], which was suboptimal in the sparse or very-high-privacy regimes (that is, when $\epsilon \sqrt{pn} \ll 1$). For robust edge density estimation of Erdos-Renyi graphs (even without requiring privacy), the new algorithm also improves on the accuracy of prior work ([AJK+22]) in the sparse regime. Thus, the Erdos-Renyi result improves on SOTA for both privacy and robustness, achieving the optimal bound (up to $\log n$ factors). Moreover, the results are extended to the more challenging inhomogeneous random graph setting, achieving the optimal rate in polynomial time here as well, and novel, tight lower bounds are shown for both settings.
Weaknesses: The discussion of prior work is clear for the Erdos-Renyi setting, but it would be useful to have a clearer discussion of what was previously known for the more general inhomogeneous random graph setting under DP and under robustness; see the specific questions 1--2 below.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) How does the rate achieved for inhomogeneous random graphs compare to the rate achieved by the exponential-time algorithm of [BCSZ18] for graphons?
2) Is there prior work on robust estimation of general inhomogeneous random graphs, and if so, what rate was achieved?
3) Can we hope to extend the privacy lower bounds (Theorem 1.5 and 1.8) to (epsilon, delta)-DP and not just epsilon-DP?
Minor comment:
line 188: should d^\circ be defined as (n-1) p^\circ instead of n p^\circ?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes, adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1:**
> How does the rate achieved for inhomogeneous random graphs compare to the rate achieved by the exponential-time algorithm of [BCSZ18] for graphons?
**Response 1:**
[BCSZ18] uses the Laplace mechanism for edge density estimation of graphons.
The privacy cost of their algorithm is $\mathrm{polylog}(n) R/(\epsilon d)$, significantly worse than $\mathrm{polylog}(n) R/(\epsilon n)$ in our result.
**Question 2:**
> Is there prior work on robust estimation of general inhomogeneous random graphs, and if so, what rate was achieved?
**Response 2:**
To the best of our knowledge, no previous work had studied robust estimation of general inhomogeneous random graphs.
**Question 3:**
> Can we hope to extend the privacy lower bounds (Theorem 1.5 and 1.8) to (epsilon, delta)-DP and not just epsilon-DP?
**Response 3:**
Our proof of the $\epsilon$-DP lower bound uses a packing argument. Packing arguments can also be used to prove ($\epsilon$,$\delta$)-DP lower bounds, but they usually would not result in meaningful ($\epsilon$,$\delta$)-DP lower bounds.
A standard and powerful tool to prove ($\epsilon$,$\delta$)-DP lower bounds is the so-called fingerprinting technique [BUV14], which is totally different from packing arguments.
We expect some non-trivial work is needed to prove meaningful ($\epsilon$,$\delta$)-DP lower bounds for this problem.
This is an interesting open question.
**Comment 4:**
> line 188: should $d^\circ$ be defined as $(n-1) p^\circ$ instead of $n p^\circ$?
**Response 4:**
Thank you for pointing this out.
We define $d^\circ = n p^\circ$ just for notational convenience.
Strictly speaking, the expected average degree should be $(n-1) p^\circ$ instead of $d^\circ$.
However, these two quantities agree up to a factor of $\frac{n-1}{n}$, since $(n-1) p^\circ = \frac{n-1}{n} d^\circ$.
**Reference:**
- [BUV14] Bun, Mark, Jonathan Ullman, and Salil Vadhan. "Fingerprinting codes and the price of approximate differential privacy." Proceedings of the forty-sixth annual ACM symposium on Theory of computing. 2014.
- [BCSZ18] Borgs, Christian, et al. "Revealing network structure, confidentially: Improved rates for node-private graphon estimation." *2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS)*. IEEE, 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses to my questions. | Rebuttal 1:
Rebuttal: We are very grateful to all reviewers for constructive feedback.
We will incorporate these helpful suggestions into the proceedings version of our paper.
**Question:**
What is the key conceptual contribution of our paper, compared to previous work [BCSZ18] and [SU19]?
**Answer:**
[BCSZ18], [SU19] and our work all consider $\epsilon$-differentially node private algorithms for edge density estimation under Erdos-Renyi graph distribution $\mathbb{G}(n,d^\circ/n)$.
- The privacy cost of [SU19] is negligible when $\epsilon \gg (nd^\circ)^{-1/4}$. In comparison, the privacy cost for our algorithm is negligible when $\epsilon \gg n^{-1/2}$, which is a significantly wider privacy parameter range. Our guarantee matches that of the exponential-time algorithm in [BCSZ18].
- [BCSZ18], [SU19] and our work use different techniques. [BCSZ18] uses a generic Lipschitz extension based on exponential mechanisms. [SU19] uses the smooth sensitivity technique. Our work uses the sum-of-squares exponential mechanism.
- [SU19] can only exploit the degree concentration property of Erdos-Renyi graphs. In fact, the privacy cost of [SU19] is optimal on degree-concentrated graphs and thus cannot be further reduced. The reason our algorithm can surpass their lower bound is that our algorithm can also make use of the spectral norm. More specifically, we exploit the property of Erdos-Renyi graphs that the spectral norm of the centered adjacency matrix is bounded by $\tilde{O}(\sqrt{d})$. Note that for graphs with maximum degree $d$, the centered adjacency matrix can have spectral norm as large as $\Omega(d)$.
- Moreover, we show that the privacy cost of our algorithm is information-theoretically necessary (up to a $\log n$ factor). In this sense, our result is nearly optimal.
- We also extend our results to the inhomogeneous random graph models, improving the results from [CDd+24].
**Reference:**
- [BCSZ18] Borgs, Christian, et al. "Revealing network structure, confidentially: Improved rates for node-private graphon estimation." 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2018.
- [SU19] Ullman, Jonathan, and Adam Sealfon. "Efficiently estimating Erdos-Renyi graphs with node differential privacy." Advances in Neural Information Processing Systems 32 (2019).
- [CDd+24] Chen, Hongjie, et al. "Private graphon estimation via sum-of-squares." Proceedings of the 56th Annual ACM Symposium on Theory of Computing. 2024. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper concerns the graph density estimation question for random Erdos-Renyi graphs. The optimal algorithm, without additional constraints, for this problem is to output the density of edges in the graph.
The paper considers a node-private robust flavor of the question, however:
1. The output is not supposed to reveal much information about any node. More precisely, for any node, the distribution over estimates the algorithm produces should not differ significantly depending on whether that node is included or not. (This task is much easier to achieve in the edge privacy model.)
2. The algorithm is supposed to be robust, i.e., if the edges involving a small number of vertices are arbitrarily edited, this should not throw the estimate off significantly. (This may require, for instance, disregarding some fraction of vertices that are significantly off.)
The paper gives a near optimal algorithm in this setting that runs in polynomial time and it establishes asymptotic bounds on the error achieved by the algorithm.
Additionally, the paper gives an extension of the result to non-homogeneous random graphs.
Strengths: This is a technically impressive achievement with the paper combining many desirable properties of algorithms for a natural toy problem. This is an interesting result for anyone studying private estimation and private release of graph parameters.
Weaknesses: This is not directly a very practical problem, since Erdos-Renyi graphs do not appear frequently in natural contexts and even the generalization assumes that the selection of different edges is independent, which is not true for many models of real-world graphs.
The algorithm may be difficult to implement and run in practice due to the complexity of tools used, which include semidefinite programming.
Technical Quality: 4
Clarity: 3
Questions for Authors: Just to make sure, in the inhomogeneous setting, the algorithm has to know $Q^\circ$, right? Otherwise, it could be a zero-one matrix equal to the graph adjacency matrix. What guarantees can be given if $Q^\circ$ is unknown?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment 1:**
> This is not directly a very practical problem, since Erdos-Renyi graphs do not appear frequently in natural contexts and even the generalization assumes that the selection of different edges is independent, which is not true for many models of real-world graphs.
**Response 1:**
Indeed, developing realistic and mathematically tractable random graph models is an important and long-standing question in the field.
Our algorithm and theoretical analysis extend to more general graphs in the following aspects:
- In the paper, we extend our algorithms and theoretical guarantees to the much more general inhomogeneous random graph models, where we make minimal assumptions beyond the independence between the edges.
Also, since our algorithm is robust under node corruption, the guarantees will still hold even under minor dependence between the edges in the graph.
- Moreover, we expect that the guarantees of our algorithm easily extend to graphs with bounded maximum degree and bounded spectral radius, which goes beyond the setting where the edges in the graph are independent.
In particular, these assumptions are easy to test given the graph instance.
Therefore, our work can be viewed as a first step towards private and efficient algorithms for statistical estimation in more realistic graph models.
**Comment 2:**
> The algorithm may be difficult to implement and run in practice due to the complexity of tools used, which include semidefinite programming.
**Response 2:**
We completely agree that it remains a fascinating and important open question to obtain algorithms for this problem that are more practical or at least have better running times, ideally nearly-linear time.
Our theoretical work shows that polynomial-time algorithms for this problem exist and that there are no complexity-theoretic obstacles toward practical algorithms.
We believe that such basic, polynomial time solvable problems ought to have practical algorithms.
We hope that some of the ideas behind our algorithm could pave the way toward such practical algorithms for this basic problem.
**Question 3:**
> Just to make sure, in the inhomogeneous setting, the algorithm has to know $Q^\circ$, right? Otherwise, it could be a zero-one matrix equal to the graph adjacency matrix. What guarantees can be given if $Q^\circ$ is unknown?
**Response 3:**
No, our algorithm does not need to know $Q^{\circ}$. We assume there is an unknown $Q^{\circ}$. Given a graph generated by $Q^{\circ}$, our goal is to estimate the average of the entries in the matrix $Q^{\circ}$.
The privacy cost of our algorithm is $\frac{R \log n}{\epsilon n}$ where $R$ is the ratio between the largest entry of $Q^{\circ}$ and the average of $Q^{\circ}$.
For $Q^{\circ}$ equal to a graph adjacency matrix, we have $R=1/p$ where $p$ is the edge density of the graph.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and in particular, replying to my question. | null | null | null | null | null | null |
LLM Evaluators Recognize and Favor Their Own Generations | Accept (oral) | Summary: This paper investigates whether large language models can identify their own generations in two settings: when they have to distinguish between their output and an output from another large language model or a person, and when they are given only one output and must score it on a 1-5 Likert scale. The authors then investigate the correlation between self-recognition and self-preference. They find that their explanation holds up against several potential confounders.
Strengths: - The authors find novel results that will significantly influence the training and evaluation of some models
- Simple and clear presentation of methodology, results, and limitations
- Compelling analysis into potential confounders
Weaknesses: No major weaknesses - the paper was a pleasure to read!
Small points:
- L103 should have “GPT-3.5” instead of “GPT-3”
- L413 starts with “??e collect”
Technical Quality: 3
Clarity: 4
Questions for Authors: - To compute the final self-preference rating, why did you average the five possible scores weighted by the output probability rather than greedily selecting the score with the highest probability?
- On line 106, you noted “... goes against our intuition that self-recognition scores should increase as the dissimilarity between evaluator and evaluatee increases”. It’s unclear to me whether you still share this intuition after concluding this research?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors address this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Comment: Thank you for the feedback on presentation! We will incorporate them in the revision. Here we'd like to respond to the two questions:
> why report weighted average rather than greedily selected quality score
We do so mainly due to the insensitivity of LLMs in the individual measurement setting. We frequently see LLMs assigning the same score to most or all summaries. If we report the greedily sampled score, we will essentially see zero recognition/preference across the board. To get a better idea of the fine-grained differences that the LLMs might pick up, we choose to report the weighted average.
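As a minimal sketch of the weighted-average computation described above (all probability values are hypothetical, not drawn from the paper's experiments):

```python
# Sketch: weighted-average Likert score from an evaluator's token
# probabilities. All probability values below are hypothetical.

def weighted_score(score_probs):
    """score_probs maps each score (1-5) to the probability the LLM
    assigns to that score token; renormalize in case some probability
    mass falls on unrelated tokens."""
    total = sum(score_probs.values())
    return sum(score * p for score, p in score_probs.items()) / total

# An insensitive evaluator may concentrate its mass on one score for
# nearly every summary, so the greedy (argmax) score is identical...
probs_a = {1: 0.01, 2: 0.02, 3: 0.07, 4: 0.80, 5: 0.10}
probs_b = {1: 0.01, 2: 0.02, 3: 0.02, 4: 0.80, 5: 0.15}
print(max(probs_a, key=probs_a.get), max(probs_b, key=probs_b.get))  # 4 4

# ...while the weighted average still separates them (about 3.96 vs 4.06).
print(weighted_score(probs_a), weighted_score(probs_b))
```

Greedy selection collapses both summaries to the same score, while the probability-weighted mean preserves the fine-grained difference, which is the motivation given in the response above.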
> ...intuition that self-recognition scores should increase as the dissimilarity between evaluator and evaluatee increases
Great question. We are still undecided on this intuition. This specific result may be an isolated phenomenon due to GPT-4's bias against recognizing anything as written by itself. In the follow-up work, we will expand experiments to a wider range of frontier models and include base models that are less affected by RLHF fine-tuning.
Thanks again for your time!
---
Rebuttal Comment 1.1:
Comment: I accept the first argument. I am excited to see this follow-up work. | Summary: Self-evaluation is widely adopted but can lead to self-preference in that the LLM evaluator scores its own outputs higher than others while the qualities are actually equal. This paper finds that LLMs prefer their own generation because they recognize themselves. This paper conducts experiments on two summarization tasks and 3 LLMs. First, the authors show that LLMs exhibit self-preference in self-evaluation and they have a good ability in self-recognition. Then, the authors fine-tune the LLMs to make the ability of self-recognition almost perfect. They find that the self-preference strength is linearly correlated with self-recognition. The authors conduct various experiments to avoid confounding from the quality differences, ordering, fine-tuning improving quality, and other confounders from fine-tuning. Finally, the authors mention two safety concerns related to their findings.
Strengths: 1. This paper is well-written and easy to follow. All the details are shown in either the main paper or the appendix.
2. The authors sufficiently mention previous works and clearly claim their own contribution against them, without over-claiming their contribution.
3. The authors intentionally design more supplementary experiments and analyses to rule out the confounder.
Weaknesses: 1. Correct me if I am wrong. I think the hypothesis of this paper is problematic. It is not "an LLM prefers a sentence due to it is generated by itself", but "an LLM generates a sentence due to it prefer it". As for your experiment on increasing self-recognition leads to the rise of self-preference, please refer to weakness 2.
2. Correct me if I am wrong. I think there is another confounder in fine-tuning. Is there any chance that the fine-tuning lets the model learn a shortcut that selects the labeled one? For example, using self-recognition data to fine-tune an LLM may make the LLM overfit to selecting the self-recognized one even though it is asked to select the high-quality one. This potential issue can be eliminated by checking if the LLM is more likely to select the shorter one as the high-quality one after fine-tuning on the task of selecting the shorter one.
3. The scope is limited. This paper only conducts experiments on 2 summarization datasets and 3 LLMs, leading to the concern of the generalization of their findings.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In line 31, you mention "Is self-preference truly self-preference, in the sense that the LLM prefers a text because it was generated by itself?". Do you think it is the opposite: "LLM generates a text because it prefers it"? If so, I think the main hypothesis of this paper is problematic. It is not "an LLM prefers a sentence due to it is generated by itself", but "an LLM generates a sentence due to it prefer it".
As for your experiment on increasing self-recognition leads to the rise of self-preference, as I mentioned in weakness 1, it may be due to the shortcut learned during fine-tuning. I strongly recommend you to conduct another experiment, such as checking if the LLM is more likely to select the shorter one as the high-quality one after fine-tuning on the task of selecting the shorter one.
2. Besides, there is an easier way to verify whether LLMs prefer their own generations because they recognize themselves. We can directly tell them which one is generated by themselves and which one is not, and then ask them to evaluate which one is better. If the self-preference ratio increases, we can conclude that LLMs prefer their own generations because they recognize themselves.
3. Experiments on 3 LLMs on 2 summarization benchmarks are too small to convince me of the generalization of the findings.
4. Typos:
Line 413: ??
I will adjust my rating if you can address my concerns.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There is no potential negative societal impact. The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Comment: Thank you for the careful review. The two experiments you suggested are in fact already in the paper (please let us know if there are better ways to highlight them). Below we provide a response which should hopefully clear up the confusion.
> Correct me if I am wrong. I think the hypothesis of this paper is problematic. It is not "an LLM prefers a sentence due to it is generated by itself", but "an LLM generates a sentence due to it prefer it".
This is incorrect. Since all the summaries are generated by LLMs without reference summaries, the hypothesis "an LLM generates a sentence due to it prefer it" would not even comply with the generation process.
> Correct me if I am wrong. I think there is another confounder in fine-tuning. Is there any chance that the fine-tuning lets the model learn a shortcut that selects the labeled one?
This is incorrect. First, the fine-tuning examples are distinct from the evaluation examples, and the ordering of the two options is randomized and balanced for both, so the increase in self-preference cannot be trivially explained. If by "overfitting" and "shortcut learned during fine-tuning" you are referring to the phenomenon that, in self-preference evaluation, the model recognizes patterns it learned about its own generation during fine-tuning, and that causes more bias—this would be exactly in line with our causal hypothesis that self-preference is caused by recognition of itself.
We are not trying to explain *how* the model is recognizing itself; our hypothesis is focused on its effect on self-preference. We only control for confounders between these two properties, not shortcuts that the model can use for self-recognition. Note that the experiment suggested by the reviewer is already in the paper—fine-tuning to select a shorter/longer summary from the pair (Section 3.4)—along with many other control tasks. This did not significantly affect either self-recognition or self-preference strength.
> Limited scope
We agree that experimenting on more tasks is worthwhile and will update the paper once more experiments are completed. The following are our primary considerations for focusing on the two summarization tasks as the first step.
1. Summarization is a representative testbed for self-evaluation. As one of the first tasks used to demonstrate RLHF, summarization remains one of the most mature testbeds of LLM self-evaluation for long-form generation. With comprehensive scoring guidelines and human-written references, summarization fits our use case perfectly.
2. Cost. Our experiment—like most involving frontier LLMs—is costly, particularly due to the number of control tasks included in the fine-tuning experiment. Not counting the Cloud compute costs for Llama experiments, we used roughly 300 million tokens via inference API for evaluation, and 150 million tokens for fine-tuning.
3. Diversity. Although both XSUM and CNN/DailyMail are summarization tasks which follow the same evaluation protocol, the difference between them—extractive vs. abstractive summarization—improves the diversity of the evaluation and provides evidence for the generality of self-recognition/self-preference, as analyzed in Sec 3.2 and Figure 7.
> We can directly tell them which one is generated by themself and which one is not. And then ask them to evaluate which one is better.
We have conducted this exact experiment in Section 3.5, where we label the source of each piece of text and re-evaluate self-preference. This indeed leads to an increase in self-preference, as you hypothesized. We further experimented with using intentionally incorrect labels. This is to check if the model is capable of recognizing the real generation from textual features even when we "lie" to it in the prompt, and indeed, especially for GPT-4 on the CNN/DailyMail dataset, the model still strongly prefers its own generation (the real one) when we lie to GPT-4 that the other summary is generated by it.
Thanks again for the careful review. Hopefully this addresses your concerns. Let us know if there is any other clarification we can provide.
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: Thanks for the response. I will raise my assessment. | Summary: The paper investigates the novel topic of self-preference and self-recognition in large language models (LLMs). The experiments are well-conceived, and the use of pairwise comparison alongside individual evaluation provides a solid framework for understanding these phenomena. Despite these strengths, the work suffers from significant limitations, including a lack of sufficient experimental diversity and statistical rigor, which undermine the overall impact and reliability of the findings.
Strengths: The idea is novel and addresses an important issue in the field of AI.
The use of both pairwise comparison and single input individual evaluation is a thoughtful approach that offers valuable insights into the behavior of LLMs.
The approach of fine-tuning LLMs to investigate self-recognition and self-preference is innovative and provides new insights into model behavior.
Weaknesses: The experiments are too controlled with minimal variety, lacking sufficient breadth to thoroughly explore the randomness of the models. Additional experiments, particularly with pairs that do not include self-generated text, would strengthen the findings.
Figures, especially Figure 2, are not clearly explained. The paper would benefit from more detailed descriptions to help readers understand what each figure represents.
The paper lacks comprehensive statistical results to support its claims. More robust statistical analysis is necessary to validate the findings.
The study does not consider pairs without self-created summaries, which could provide crucial insights into whether LLM preferences are genuinely self-preferential or random.
Technical Quality: 3
Clarity: 3
Questions for Authors: What do you think will happen if text generated by the LLM is paraphrased using a paraphrasing tool and the self-recognition score is then calculated? Do you think LLMs will still be able to identify that particular text as text generated by them?
Corrections:
Line 163: the the - > the
Line 215: need - > needed
Line 252: use - > used
Line 288: self-recognitiono -> self-recognition
Also, one of the papers is cited twice: the 3rd and 4th references are the same.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Comment: Thank you for the thoughtful feedback! We'd like to address the weaknesses and questions you brought up.
> Limited scope
Within the context of summarization and the constraints with compute budget, we maximized the coverage in the following aspects of the experiments, to the best of our ability:
1. Task diversity: by experimenting on both extractive and abstractive summarization
2. Evaluation format: pairwise and individual
3. Fine-tuning setup: in- and out-of-domain, number of training examples
4. Control tasks: to rule out as many confounders as we can
5. Ordering of options in pairwise evaluations
> Statistical significance
We perform Chi-Squared tests and confirm the statistical significance of all the following claims (p-value << 0.001):
1. LLMs demonstrate preference for their own generations disproportionately compared to humans
2. LLMs demonstrate significantly higher self-preference after fine-tuning for self-recognition
3. There is a significantly higher increase in self-recognition and self-preference from fine-tuning for self-recognition compared to fine-tuning on the control tasks.
We will update the draft with these more comprehensive details.
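As a hedged illustration of the kind of test involved (a chi-squared test of independence on a 2x2 contingency table, with toy counts rather than the paper's actual data):

```python
# Pearson chi-squared test of independence on a 2x2 table, stdlib only.
# All counts below are made up for illustration.

def chi2_stat_2x2(table):
    """Chi-squared statistic for [[a, b], [c, d]], no continuity correction."""
    n = sum(sum(row) for row in table)
    col_sums = [sum(row[j] for row in table) for j in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = sum(table[i]) * col_sums[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Toy counts: choices of "own summary" vs "other summary" by an LLM
# evaluator and by human annotators over the same (hypothetical) pairs.
table = [[700, 300],   # LLM:   own, other
         [520, 480]]   # human: own, other
stat = chi2_stat_2x2(table)

# The critical value for 1 degree of freedom at alpha = 0.001 is about
# 10.83, so a statistic this large (roughly 68) corresponds to p << 0.001.
print(stat, stat > 10.83)
```

In practice one would use a library routine such as `scipy.stats.chi2_contingency` rather than the hand-rolled statistic above; the toy table only illustrates how disproportionate self-preference relative to human judgments shows up as a large statistic.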
> Non-self-created baseline
We agree that a baseline of pairwise evaluation on non-self-created texts exclusively is a good addition, and will incorporate that in the camera-ready version. We do note that the existing pairwise results already address the ordering bias by evaluating each pair twice, with both orderings of the options. In addition, the individual measurements that demonstrate negligible self-preference suggest that the LM will likely (correctly) assign close to 50-50 when neither example in a pair is written by itself (assuming equal quality).
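The two-ordering evaluation mentioned above can be sketched as follows; the judge here is a made-up stand-in for an LLM call, with an invented first-position bias, purely to show why averaging over both orders cancels pure position effects:

```python
# Sketch: order-balanced pairwise evaluation. 'biased_judge' is a toy
# stand-in for an LLM judge; all numbers are invented for illustration.

def biased_judge(first, second):
    """Return P(first option preferred): sensitive to true quality,
    plus a fixed bias toward whichever option is presented first."""
    quality_edge = 0.2 * (first["quality"] - second["quality"])
    position_bias = 0.1
    return min(1.0, max(0.0, 0.5 + quality_edge + position_bias))

def order_balanced_pref(a, b, judge):
    """Average P(a preferred) over both presentation orders."""
    p_a_first = judge(a, b)          # a shown first
    p_a_second = 1.0 - judge(b, a)   # a shown second
    return (p_a_first + p_a_second) / 2

a, b = {"quality": 0.6}, {"quality": 0.6}       # equal quality
print(biased_judge(a, b))                       # 0.6: inflated by position
print(order_balanced_pref(a, b, biased_judge))  # 0.5: bias cancelled
```

With a genuinely better option, the balanced score still moves above 0.5, so the correction removes only the position artifact, not real preference.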
We appreciate your feedback on presentation of figures and will incorporate it in revision. Thank you!
---
Rebuttal Comment 1.1:
Comment: Thank you for the considerations; with these additions made to the camera-ready paper, I am increasing my score to 6. | Summary: The authors examine self-preference in language models through the lens of self-recognition. They pose the following question: if models indeed prefer themselves, is it also because they recognize themselves? The authors explore a range of models and find correlates between self-recognition and self-preference. Furthermore, the authors explore potential causal links between self-recognition and preference by finetuning models on confounding tasks (a true causal analysis is prohibitive given that a mechanistic understanding of LLMs is unavailable).
Strengths: Disentangling preference and recognition is an insightful idea! This enables the authors to explore a potential causal relationship.
Cross-checking with a human evaluation is great; so is taking into account ordering effects. It’s clear that the authors put thought into their prompting and evaluation.
I think the control tasks are quite diverse, and the paper does a good job analyzing potential confounds.
Weaknesses: Some weaknesses are formulated as questions in the questions section.
I do have one (big-ish) concern. Summarization is a limited task. The authors do mention this in the limitations of the paper—but a potential confound might be memorization of the summarization datasets (e.g. models that have seen the dataset more during training are also more likely to prefer the same dataset). I really do think it is worthwhile running preliminary expts. on other domains.
Technical Quality: 3
Clarity: 4
Questions for Authors: While the paper is well written and comprehensive, I have a few questions I'd be curious about:
1. Scaling effects: Are larger models better at self-recognition? I would’ve liked to see trends across scale. I was looking at Figure 1 and trying to understand if there was a trend (e.g. GPT 3.5 is purportedly smaller than 4), but it would’ve been nice to see Llama 70B results. If this isn’t possible due to computational constraints, I totally understand! But some hypotheses would still be nice.
2. Are differences in preferences between LLMs and human annotators statistically significant? Re: the line below, the authors claim significance; is there a test that backs this up?
```But the disparity between LLMs as rated by humans is significantly lower than the level of self-preference exhibited by the LLMs, in particular GPT-4. This suggests that out of the box, the LLMs’ self-preference is disproportionate to the actual quality differences.```
3. How much of this goes away with a prompting mitigation? (e.g. include something like “Don’t be biased to your own outputs”) in the prompt? I think that would be a really interesting finding—regardless of what you find.
A small suggestion: in Figure 1, I would draw a vertical / horizontal line at x = 0.5 and y = 0.5, just to quickly see which models fall in which quadrants.
Line 175: it’s -> its
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Comment: Thank you for the thoughtful feedback! We'd like to address some concerns and questions brought up in the review.
> Limitations of running the experiments on two summarization tasks
We agree that experimenting on more tasks is worthwhile and will update the paper once more experiments are completed. The following are our primary considerations for focusing on the two summarization tasks as the first step.
1. Summarization is a representative testbed for self-evaluation. As one of the first tasks used to demonstrate RLHF, summarization remains one of the most mature testbeds of LM self-evaluation for long-form generation. With comprehensive scoring guidelines and human-written references, summarization fits our use case perfectly.
2. Cost. Our experiment—like most involving frontier LLMs—is costly, particularly due to the number of control tasks included in the fine-tuning experiment. Not counting the Cloud compute costs for Llama experiments, we used roughly 300 million tokens via inference API for evaluation, and 150 million tokens for fine-tuning.
3. Diversity. Although both XSUM and CNN/DailyMail are summarization tasks which follow the same evaluation protocol, the difference between them—extractive vs. abstractive summarization—improves the diversity of the evaluation and provides evidence for the generality of self-recognition/preference, as analyzed in Sec 3.2 and Figure 7.
> Scaling effects
Based on our experiments, larger and more capable models do appear better at self-recognition. One hypothesis is that larger models’ output probabilities are more focused around their own generations. State-of-the-art LLM detection methods compute the LLM’s perplexity on generated text vs. perturbed versions and use that gap for detection. Given evidence that suggests that detection is at least not more difficult for larger and more capable LLMs, we hypothesize that larger LLMs’ perplexity is more sensitive to whether the text is generated by the LLM. If self-recognition relies on similar mechanisms as detection methods (this is one hypothesis we are investigating), then this might explain why larger LLMs are better at self-recognition.
> Significance of difference in preference between humans and LLMs
Using the pairwise format, we run a Chi-squared test of statistical significance for the difference between self-preference (Figure 4) and human preference (Sec 2.5), and find that the difference is significant (p-value << 0.001) for GPT-4 (compared to GPT-3.5 and Llama 2) even prior to fine-tuning, and significant (p-value ~ 0.007) for GPT-3.5 after fine-tuning on 10 examples for self-recognition ability. We will update the draft with these more comprehensive details.
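For concreteness, the kind of Pearson chi-squared test described above can be sketched as follows. The counts are made up for illustration (they are not the paper's data), and the statistic is computed by hand rather than via `scipy.stats.chi2_contingency` (which, for 2x2 tables, additionally applies Yates' correction by default):

```python
import numpy as np

# Hypothetical 2x2 contingency table: in 100 pairwise comparisons each,
# suppose the LLM judge prefers its own summary 90 times while human
# annotators prefer that LLM's summary 60 times on the same pairs.
counts = np.array([[90.0, 10.0],   # LLM judge: own output preferred / not
                   [60.0, 40.0]])  # human judges on the same pairs

row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
expected = row * col / counts.sum()          # expected counts under independence
chi2 = ((counts - expected) ** 2 / expected).sum()

# df = 1 for a 2x2 table; the critical value for p < 0.001 is 10.83,
# so this hypothetical difference in preference rates is significant.
assert chi2 > 10.83
```

With these illustrative counts the statistic is 24.0, far above the p < 0.001 critical value.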
> Prompt sensitivity
In our initial experiments, self-recognition/self-preference seems insensitive to instructions in the prompt. We refrain from prompts like “Don’t be biased to your own outputs” in self-preference evaluation because we don’t have a good way to decouple the effect of priming the model to think that one of the inputs is from itself. For this submission, we wanted to stick to the main message so as to not confuse readers. For follow-up work we will perform more thorough prompt engineering, including giving GPT-4 a better prior of the likelihood of its own outputs showing up.
Thank you again for your time. Let us know if there is any other clarification we can provide or if you have other suggestions!
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: Thanks for the rebuttal! I'm keeping my (positive) score, since the current paper + rebuttal only has results on summarization. Still, great work!! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Dormant Neuron Phenomenon in Multi-Agent Reinforcement Learning Value Factorization | Accept (poster) | Summary: The paper proposes ReBorn to reactivate dormant neurons in the mixing network of multiagent value-mixing algorithms. Specifically, ReBorn transfers the weights from overweight neurons to dormant neurons and this method ensures the learned action preferences are not changed after the ReBorn operation. Experiments on SMAC, predator-prey, and SMACv2 demonstrate the effectiveness of the proposed method.
Strengths: 1. The idea is novel and the motivation is well-explained.
2. Experiments on different scenarios show that ReBorn improves the performance of different baselines.
3. The authors theoretically prove that ReBorn would not affect the learned action preferences.
Weaknesses: 1. It seems that the ReBorn operation is time-consuming as it needs to compute and manipulate each neuron in the network. If the network is large, this ReBorn operation may become infeasible.
2. The ReBorn is only validated in the multiagent value-mixing algorithms.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the authors compare the running time of the baselines and their ReBorn variants?
2. Could ReBorn be applied to other MARL algorithms besides the multiagent value-mixing algorithms such as MAPPO and MADDPG?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for valuable comments and suggestions, which can help improve the quality of our work. We address the concerns of the reviewer as follows.
>Weakness 1: It seems that the ReBorn operation is time-consuming as it needs to compute and manipulate each neuron in the network. If the network is large, this ReBorn operation may become infeasible.
>Question 1: Could the authors compare the running time of the baselines and their ReBorn variants?
The time complexity of the ReBorn operation increases **linearly** with the number of neurons. In the 27m_vs_30m environment, ReBorn is performed 19 times. The training time of the QMIX algorithm increases from 23.8 hours to 24.6 hours. By applying ReBorn, the win rate of QMIX increases from 39% to 51%. The performance benefit (51/39=1.31) brought by ReBorn **outweighs its computational cost** (24.6/23.8=1.03).
We have compared the running time of ReDo and Reset with ReBorn, as well as with two new baselines, SR[1] and MARR[2]. The wall clock time is depicted in the table below. As shown in the table, ReDo, Reset, and ReBorn take almost the same amount of time. Additionally, SR[1] and MARR[2] take more time than ReBorn.
|Map/QMIX|Baseline|ReSet|Redo|ReBorn|SR|MARR|
|--|--|--|--|--|--|--|
|3s_vs_5z(2M)|10.2|10.4|10.6|10.9|15.3|11.3|
|MMM2(2M)|12.3|12.4|13.1|12.9|17.4|13.6|
|27m_vs_30m(2M)|23.8|24.2|24.7|24.6|29.3|25.3|
|2c_vs_64zg(2M)|18.3|18.7|19.2|19.3|24.2|19.6|
|stag_hunt_s(1M)|3.9|3.9|4.2|4.1|6.3|4.8|
|stag_hunt_m(1M)|7.8|8.0|8.5|8.3|10.3|8.9|
|stag_hunt_l(1M)|12.7|13.1|13.4|13.6|18.6|14.2|
We agree with the reviewer that there could be better ways to reduce the computational overhead of ReBorn. For example, ReBorn could be applied only to a few layers of a neural network rather than the whole neural network. We plan to explore such alternatives in the future.
> The ReBorn is only validated in the multi-agent value-mixing algorithms.
As it is written in the title of our submission: "The Dormant Neuron Phenomenon in Multi-Agent Reinforcement Learning Value Factorization", we focus on the value factorization (value mixing) algorithms.
We have applied ReBorn to the critic networks of MADDPG and MAPPO, which are not multi-agent value-mixing algorithms. The experimental results are shown in Figure 6 of the response PDF. As shown there, ReBorn can reduce the dormant ratio of the critic networks of MADDPG and MAPPO, and improve their performance as well. In future work, we expect ReBorn to be applied to other multi-agent settings.
References
[1] D'Oro et. al., Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier, ICLR, 2023
[2] Yang et al., Sample-Efficient Multi-agent Reinforcement Learning with Reset Replay, ICML, 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. Most of my concerns are addressed. I have one remaining problem.
For the new baselines [1, 2] that were compared, what are the replay ratios for them and ReBorn? It seems that the two methods are working with higher replay ratios larger than 1. Could you compare ReBorn and them at different replay ratio settings such as 1, 5, and 10?
---
Rebuttal 2:
Comment: Dear Reviewer,
We are happy that we have addressed most of your concerns.
For SR[1] and MARR[2], we use their default replay ratios. The replay ratios for SR and MARR are 4 and 1, respectively. The replay ratio for ReBorn is 1.
As per the Reviewer's suggestion, we've conducted a performance evaluation of ReBorn, SR, and MARR for QMIX on the predator-prey environments at varying replay ratios (1, 5, and 10). Notably, the behavior of ReBorn remains consistent regardless of the replay ratio. However, for SR, the number of reset operations increases with the replay ratio, and for MARR, the number of 'random amplitude scale data augmentation' operations also increases with the replay ratio.
For predator-prey small environments, the return and the time (hours) for different algorithms with different replay ratios are depicted as follows. The training time increases with the increase of the replay ratio. We find that ReBorn performs better than SR and MARR when the replay ratio is lower than 5. When the replay ratio is 10, the performance of ReBorn is slightly better than SR and MARR.
|predator-prey small (time)|1|5|10|
|-|-|-|-|
|Reborn|4|9|13|
|SR|4|7|12|
|MARR|4|9|14|
|predator-prey small (return)|1|5|10|
|-|-|-|-|
|Reborn|112|115|114|
|SR|42|98|113|
|MARR|51|97|110|
For the predator-prey medium environments, the return and the time for different algorithms with different replay ratios are depicted as follows. We find that when the replay ratio is lower than 5, ReBorn performs better than SR and MARR. When the replay ratio is 10, ReBorn performs weaker than SR and MARR.
|predator-prey medium (time)|1|5|10|
|-|-|-|-|
|Reborn|7|13|20|
|SR|7|11|18|
|MARR|7|14|22|
|predator-prey medium (return)|1|5|10|
|-|-|-|-|
|Reborn|205|203|195|
|SR|78|162|203|
|MARR|89|182|208|
Based on our experimental results, it's clear that ReBorn should consider the replay ratio to achieve better performance.
References
[1] D'Oro et. al., Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier, ICLR, 2023
[2] Yang et al., Sample-Efficient Multi-agent Reinforcement Learning with Reset Replay, ICML, 2024
---
Rebuttal Comment 2.1:
Comment: I appreciate the authors for replying to my question. I have no further questions and I would like to maintain my score. But I recommend including the results of baselines [1, 2] and Reborn with higher replay ratios in SMAC and predator-prey for a fair comparison in the final revision.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer,
We are happy that we have addressed your concerns. We will add the results of baselines[1,2] and Reborn with higher replay ratios in SMAC and predator-prey.
Best Regards,
Authors | Summary: This paper introduces a novel approach for tackling the plasticity loss and securing sample efficiency in multi agent reinforcement learning (MARL). Different from the methods in single agent RL, this paper also shows the importance of not violating the knowledge invariant (KI) principle in MARL. With careful analysis and extensive experiments, the proposed method, ReBorn, outperforms other baselines such as ReDo or Reset. The authors said that the baselines, ReDo and Reset, violate the KI principle, thus they are not appropriate to be applied to MARL setting. However, the proposed method, ReBorn, can satisfy the KI principle, leading to achieve remarkable performance.
Strengths: - One of the most important part in this paper, the KI principle, is well defined, and can represent the constraints in MARL setting. Due to the difficulty of satisfying this principle, applying previous methods is not straightforward. Nevertheless, the proposed method, ReBorn, does not violate this principle, and outperforms the baselines.
- Scaling the input and output weights connected to overweight neurons is a novel approach for resolving the plasticity loss problem in Rl community. This paper pointed out the drawback of ReDo in MARL scenario, and proposed a novel approach.
Weaknesses: - The motivation behind scaling the weights in ReBorn is not clear. Though scaling the weights to ease the strength of the overweight neurons is straightforward, is it only way to reduce the number of overweight neurons? I'm quite confusing why we should use the scaling approach in ReBorn.
- I think there is a missing baseline [1]. This approach can also effectively resolve the plasticity loss problem, and achieves high sample efficiency in singe agent RL. Although this approach violates the KI principle, with high replay ratio, it may achieve good performance in MARL setting. Similar to [1], a recently proposed method [2] show that resetting the network and training the network with high replay ratio in MARL setting can outperform the baselines. I know it is difficult to compare ReBorn and the method in [2] at submission phase, but I wonder the method in [2] violating the KI principle is truly failed to overcome the plasticity loss.
[1] D'Oro et. al. ,Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier, ICLR, 2023
[2] Yang et. al., Sample-Efficient Multiagent Reinforcement Learning with Reset Replay, ICML, 2024
Technical Quality: 3
Clarity: 2
Questions for Authors: Already mentioned in the Weakness section.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your time and effort in reviewing this work. Your suggestions are valuable. We have compared our method with the two approaches mentioned by the reviewer. We address the concerns of the reviewer as follows.
>The motivation behind scaling the weights in ReBorn is not clear. Though scaling the weights to ease the strength of the overweight neurons is straightforward, is it only way to reduce the number of overweight neurons? I'm quite confusing why we should use the scaling approach in ReBorn.
We agree with the reviewer that there could be multiple ways to ease the strength of the overweight neurons. We did explore another way of distributing the weights, which sets $\alpha_i = 1/(M+1)$ and $\beta = 1$. As shown in Figure 4 of the response PDF, such an average weight-sharing method performs worse than ReBorn's weight-scaling approach. We would also like to explore the method used by [2], which uses a Shrink & Perturb strategy.
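As a minimal sketch of the redistribution idea on a single ReLU layer (this is not the authors' exact implementation: the indices, the Dirichlet draw for $\alpha$, the bias being scaled by $\alpha_i$ as well, and the outgoing weights being divided by $\beta_i$ are all assumptions made here so that the layer's output is exactly preserved, i.e., the KI principle holds by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# One hidden layer: y = W_out @ relu(W_in @ x + b_in)
d_in, d_hid = 4, 6
W_in = rng.normal(size=(d_hid, d_in))
b_in = rng.normal(size=d_hid)
W_out = rng.normal(size=(1, d_hid))

over, dormant = 0, [3, 4, 5]    # illustrative neuron indices
W_out[:, dormant] = 0.0         # dormant neurons contribute nothing initially

x = rng.normal(size=d_in)
y_before = W_out @ relu(W_in @ x + b_in)

M = len(dormant)
alpha = rng.dirichlet(np.ones(M))      # alpha_i >= 0, summing to 1
beta = rng.uniform(0.5, 1.5, size=M)   # random per-neuron scale

for a_i, b_i, j in zip(alpha, beta, dormant):
    W_in[j] = b_i * a_i * W_in[over]    # scaled copy of incoming weights
    b_in[j] = b_i * a_i * b_in[over]    # bias scaled the same way (assumption)
    W_out[:, j] = W_out[:, over] / b_i  # compensate so the output is unchanged
W_in[over], b_in[over], W_out[:, over] = 0.0, 0.0, 0.0

# relu(c*z) = c*relu(z) for c >= 0, so the redistributed dormant neurons
# together reproduce the overweight neuron's contribution exactly.
y_after = W_out @ relu(W_in @ x + b_in)
assert np.allclose(y_before, y_after)  # knowledge-invariant redistribution
```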
>I think there is a missing baseline [1]. This approach can also effectively resolve the plasticity loss problem and achieve high sample efficiency in single-agent RL. Although this approach violates the KI principle, with a high replay ratio, it may achieve good performance in the MARL setting. Similar to [1], a recently proposed method [2] shows that resetting the network and training the network with a high replay ratio in the MARL setting can outperform the baselines. I know it is difficult to compare ReBorn and the method in [2] at the submission phase, but I wonder if the method in [2] violating the KI principle truly failed to overcome the plasticity loss.
We will add the following discussion to the related work section. *[1] increases replay ratio and resets all the parameters of networks based on the number of updates. For MARL, [2] periodically resets the parameters of agent networks and uses data augmentation to further increase the replay ratio.*
We have evaluated the performance of [1] and [2] in four more environments. The results are depicted in Figure 2 of the response PDF. As shown in Figure 2, these two methods perform worse than ReBorn. We will add these experimental results to our work.
We agree with the reviewer that the KI principle offers a perspective on the plasticity loss problem. There could be more perspectives on this issue.
References
[1] D'Oro et. al., Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier, ICLR, 2023
[2] Yang et al., Sample-Efficient Multiagent Reinforcement Learning with Reset Replay, ICML, 2024
---
Rebuttal 2:
Comment: Dear Reviewer,
Thanks for your time and effort in reviewing our work. We greatly appreciate your insightful and detailed feedback. We have carefully addressed the concerns in the rebuttal. Please let us know if any aspects remain unclear, and we are happy to provide further clarification.
Best regards,
Authors | Summary: The paper explores the dormant neuron phenomenon, an active research topic in RL, in the context of MARL value factorization. It describes how dormant neurons manifest mostly in the mixing network, details their effect on performance and connects them to an opposite but correlated phenomenon, overweight neurons. The paper proposes a method to counter the dormant neuron phenomenon by redistributing the weights of overweight neurons among dormant neurons. In MARL value factorization, the proposed method is demonstrated to outperform general RL methods for dealing with dormant neurons.
Strengths: The paper text is easy to read and understand, and the problem the paper investigates is important and topical. Evaluation shows that, on two environments, the proposed method (ReBorn) is clearly better than general RL methods used to solve this problem. The discussion on the existence of overweight neurons is also significant for the field outside of the specific problem setting.
Weaknesses: The figures and figure captions are often hard to understand. Some plots are hard to interpret solely from the plot and caption text, e.g. to understand what 'Number' means in Figure 2a, one must look for explanations in the text body. Another example is Figure 4, which the reader is referred to for a depiction of the main method of the paper, but contains a total of 8 words in legends and caption combined. In contrast, the figures themselves are visually clear to understand.
This problem can be easily fixed by expanding some of the captions and using more expressive wording in figure legends and titles. I am willing to raise my score if the figures are presented in a more readable manner.
Technical Quality: 4
Clarity: 2
Questions for Authors: Suggestion:
Evaluation on more environments could strengthen the paper's claim of superiority over other methods. However, I find the current evaluation experiments sufficient to make that claim.
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: Limitation are not discussed in the main section, only in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer's time and effort to review this paper. We will improve the quality of the figures and add more experimental results to our work.
>The figures and figure captions are often hard to understand.
Thanks for your suggestion on Figure 2a and Figure 4. We have improved the presentation of these two figures. They are presented in Figures 1 and 3 in the response PDF. Additionally, we have improved the readability of Figure 2(b), Figure 2(c), Figure 3(a-c), Figure 5(a-c), Figure 6 (a-c), Figure 7 (a-c), and Figure 8 (a-c) as follows.
For Figure 2a, we have changed its vertical label from "Loss" to "MSE Loss for a Mixing Network" and changed its legend from "Number" to "Number of Dormant Neurons". Further, we have also updated the caption of the Figure from "Dormant neurons hurt mixing network expressivity" to "The MSE Loss for fitting a simple Mixing Network increases with an increasing number of Dormant Neurons. It indicates that dormant neurons hurt mixing network expressivity."
For Figure 4, we have updated the caption of the Figure from "ReBorn Method" to "The ReBorn procedure. The weights of overweight neurons are distributed to $M$ randomly picked dormant neurons. $w_x^{in}$, $w_x^{out}$, and $b_x$ are the input weights, output weight, and bias for an overweight neuron. $w_i^{in}$, $w_i^{out}$, and $b_i$ are the input weights, output weight, and bias for dormant neuron $i$. After ReBorn, the input weights and bias for a dormant neuron $i$ become $\beta_i\alpha_iw_x^{in}$ and $\beta_ib_x$, where $0\leq \alpha_i \leq 1$, $\sum_{i=0}^M\alpha_i=1$, and $\beta_i$ is a random number in $[0.5, 1.5]$."
For Figure 2(b), we have relocated the legends from the top-left to the bottom-right and changed the legend from "Interval" to "Update Interval."
For Figure 2(c), we have changed its vertical label from "Percent" to "NAS Percentage", and changed its horizontal label from "Score ranking" to "NAS ranking for top-25 neurons". We have updated its caption from "Overweight neurons in the QMIX mixing network" to "The Normalized Activation Score (NAS) percentage ranking for top-25 overweight neurons in the QMIX mixing network."
For Figure 3, we have updated the caption to "Overweight neurons in QMIX mixing networks: (a) The percentage contribution of the number of dormant neurons (depicted as Dormant), the number of overweight neurons (depicted as Overweight-Number), the sum of NAS (depicted as Overweight-Sum) for overweight neurons over time. (b) Overlap coefficients for Dormant/Overweight neurons between the current iteration and previous iterations. (c) Percentage of dormant neurons that re-enter dormancy after ReDo within different time steps."
For Figure 5 (a-c), we have changed the caption from "ReBorn can improve the performance of various value factorization algorithms" to "ReBorn can improve the performance of various value factorization algorithms: (a-b) the test win rate for the 3s5z\_vs\_3s6z and the MMM2 environments, (c) the return for the predator-prey small environment, (d-e) the dormant percentage for the 3s5z\_vs\_3s6z, the MMM2, and the predator-prey small environments."
For Figure 6 (a-c), we have updated the caption from "Comparison with other Parameter Perturbing Methods." to "Comparison with other Parameter Perturbing Methods: (a-c) The test win rate, the dormant percentage and the percentage of the sum of normalized activation score (NAS) for the MMM2 environment. (d-f) The test win rate, the dormant percentage, and the percentage of the sum of NAS for the 27m\_vs\_30m environment."
For Figure 7 (a-c), we have changed the caption from "Importance of satisfying the KI Principle" to "Importance of satisfying the KI Principle for (a) QMIX, (b) QPLEX, and (c) RMIX. A variant of ReBorn that does not satisfy the KI Principle is depicted as ReBorn w/o KI."
For Figure 8 (a-c), we have changed the caption from "Comparison with other methods that satisfy the KI principle" to "Comparison with other methods that satisfy the KI principle for (a) QMIX, (b) QPLEX, and (c) RMIX."
>Evaluation on more environments could strengthen the paper's claim of superiority over other methods. However, I find the current evaluation experiments sufficient to make that claim.
We have evaluated ReBorn against ReDo and Reset in four more environments. The experimental results are shown in Figure 2 in the response PDF. The results demonstrate that ReBorn performs better than these two methods.
>Limitation are not discussed in the main section, only in the appendix.
Thank you for your valuable suggestion. We will incorporate the discussion of the limitations in the main section.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Since you have addressed my concerns, I will raise my score to 7. | Summary: This work proposes a new method for resetting dormant neurons in multi-agent RL settings. It replicates previous observations on unit dormancy in deep RL in the multi-agent regime, finding that inactive neurons are correlated with reduced ability to improve performance. The proposed method differs from ReDO, which resets dormant neurons' incoming weights to that of a random initialization, by distributing the weights of highly active "overweight" neurons among those identified as dormant during reset steps so as to avoid changing the output of the network.
Strengths: - The paper identifies additional factors, beyond those present in single-agent deep RL, which impact the dormancy rate in a neural network, in particular showing that in QMIX and QPLEX algorithms, the *mixing* network is most vulnerable to dormancy. It also shows that increasing the number of agents, which presumably also increases the degree of nonstationarity in the problem, also increases the number of dormant units.
- The knowledge invariant principle, while not entirely novel (see, e.g. the approach of Nikishin et al. which is precisely motivated by the desire to avoid changing the network's outputs after a reset), is clearly relevant for multi-agent settings, where changing many agents' policies at once introduces greater instability than the (potentially beneficial) exploratory interpretation of parameter perturbations in single-agent algorithms.
- The observation that the accumulation of dormant neurons is accompanied by accumulation of 'overweight' neurons is sensible and points to an interesting alternative approach to maintaining plasticity: rather than focusing on resetting units which are not receiving gradients, it may be beneficial to re-distribute utility across the units in a layer. The proposed solution makes a lot of sense from this framing.
Weaknesses: I have two main concerns with this paper which prevent me from confidently recommending its acceptance: the first is the validity of the theoretical results, and the second is the significance of the empirical results. I list more minor concerns later, which may benefit the paper but which will not affect my decision.
- Major: while it is true that a parameter perturbation which changes the rankings of global actions can cause a function which previously satisfied the IGM principle to violate it, such a perturbation would also violate the assumption on the monotonicity of the global value w.r.t. agent utilities which is baked into methods like QMIX. Indeed, the proof of Theorem 1 does not provide a concrete example of a perturbation to the network parameters which violates the KI principle and violates the IGM principle, as such an example satisfying [B.5] would require non-monotonicity of the mixing function. Since the algorithms studied in the paper do use monotone mixing functions, Theorem 1 does not seem relevant.
- Major: I could not find anything in the proof of Theorem 2 which depends on the particular form of the ReBorn update. Instead it seems that Theorem 2 depends on the monotonicity of the QMIX mixing network (b.10), rather than any particular property of the perturbation to $\hat{\theta}$. This further reinforces the above concern that the proposed failing of parameter perturbation methods in the general case of Theorem 1 is in fact only a product of relaxing the assumption on the monotonicity of the mixing network.
- Major: In many instances, using ReDo appears to hurt performance compared to naive QMIX, despite reducing the dormancy percentage. Further, it appears that in some environments, applying ReDo does not change the number of dormant neurons. This causes me to wonder if the ReDo method has been appropriately tuned for the domains it is being applied to.
- Minor: to me, a better name for the so-called 'overweight' neurons would be 'over-active', since it's not clear whether their disproportionate activity is due to the magnitude of their weights or to the alignment of their incoming weights with the previous layer's features. This also makes their relationship with dormant neurons more clear.
- Minor: there are a few grammatical issues in the paper that would benefit from review. For example, the word 'albeit' is typically used at the start of a phrase, and not a clause as is often done in the paper, where it can generally be replaced with 'although'.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can the authors comment on how they tuned ReDo and give some insight into why it seems to not have any effect on the number of dormant units in Figure 6?
2. Please address my questions regarding theorems 1 and 2. Are there instances where even with a monotone mixing network, a parameter-perturbing method will fail to satisfy IGM?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments, we will improve our work based on your suggestions. We address your concerns as follows.
>The knowledge invariant principle, while not entirely novel (see, e.g., the approach of Nikishin et al., which is precisely motivated by the desire to avoid changing the network's outputs after a reset)
The approach of Nikishin et al. is the Reset method used for comparison. It aims to preserve the experience within the replay buffer and the un-reset layers. However, it does not try to avoid changing the output of the network.
>Major 1: while it is true that a parameter perturbation which changes the rankings of global actions, ... not provide a concrete example of a perturbation to the network parameters which violate the KI principle and violate the IGM principle, ... require non-monotonicity ... Theorem 1 does not seem relevant.
Here, we present an example that violates the KI and IGM principles. Assume we have trained a QMIX-like value factorization method $f$ that implements $Q_{tot}=\sum_{i=1}^Nk_i\times Q_i$, where $k_i \ge 0$. It is implemented by a two-layer neural network parameterized by the $k_i$. In the last layer, there is only one neuron (the output neuron). For each neuron $i$ in the first layer, its input is the individual utility $Q_i$, and the weight connecting it to the output neuron is $k_i \ge 0$. The activation function of the output neuron is an identity function. Clearly, $f$ is a monotonically increasing network, satisfying the IGM principle. If a parameter-perturbing method changes all the weights $k_i$ to be smaller than zero, then $f$ no longer satisfies the IGM principle, and the KI principle is violated.
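This counterexample can be checked numerically. The sketch below uses made-up individual utilities (the numbers are illustrative, not from the paper): with non-negative mixing weights, the greedy joint action equals the tuple of per-agent argmaxes (IGM); after flipping the weights' signs, it does not.

```python
import numpy as np

# Two agents, three actions each; individual utilities Q_i(a_i).
Q1 = np.array([3.0, 1.0, 2.0])
Q2 = np.array([0.5, 2.5, 1.0])

# Linear mixer Q_tot(a1, a2) = k1*Q1[a1] + k2*Q2[a2] with k_i >= 0.
k = np.array([1.0, 2.0])
Q_tot = k[0] * Q1[:, None] + k[1] * Q2[None, :]
greedy_joint = np.unravel_index(Q_tot.argmax(), Q_tot.shape)
# IGM holds: joint greedy action = per-agent greedy actions.
assert tuple(greedy_joint) == (Q1.argmax(), Q2.argmax())   # (0, 1)

# Perturb all weights to be negative: the mixer becomes monotonically
# decreasing, and the greedy joint action no longer matches the
# per-agent argmaxes, so IGM is violated.
k_bad = -k
Q_bad = k_bad[0] * Q1[:, None] + k_bad[1] * Q2[None, :]
bad_joint = np.unravel_index(Q_bad.argmax(), Q_bad.shape)
assert tuple(bad_joint) != (Q1.argmax(), Q2.argmax())      # now (1, 0)
```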
We agree with the reviewer that *[B.5] would require non-monotonicity of the mixing function*. We have shown that a perturbing method could change a mixing function from monotonic increasing to monotonic decreasing.
Besides monotonic increasing mixing functions (e.g., QMIX and RMIX), ReBorn supports both DMIX and QPLEX. DMIX has a **non-monotonic increasing** mixer for value distributions and QPLEX can model **non-monotonic** relationships. We implement ReBorn on the hypernet for QMIX, on the QMIX part of DMIX, and advantage part of QPLEX to ensure that we do not change their functional relationships. We will make this clearer.
>Questions 2: Are there instances where even with a monotone mixing network, a parameter-perturbing method will fail to satisfy IGM?
If a parameter-perturbing method changes the functional property of the mixing network, it will fail to satisfy IGM. If the functional property is preserved, it will not fail to satisfy IGM.
>Major 2: I could not find anything in the proof of Theorem 2 which depends on the particular form of the ReBorn update. ... only a product of relaxing the assumption on the monotonicity of the mixing network.
The proof of Theorem 2 does depend on the implementation of ReBorn on QMIX. We will make it clearer in the paper. We apply ReBorn to the neurons of hypernetworks used in QMIX. Hypernetworks are used to generate non-negative weights of the mixing network. A hypernetwork takes $\tau$ as input and outputs the weights of a layer of the QMIX mixing network. A hypernetwork consists of a single linear layer, followed by an activation function, which ensures that the mixing weights are non-negative after ReBorn.
After applying ReBorn, the monotonically increasing property of QMIX does not change. We do not perturb the agent network, as we find that the dormant ratio is low there. In Sec 6.4.1, we apply ReBorn to both the mixing network and the agent network, which violates the KI principle. This variant is depicted as ReBorn w/o KI in Fig. 7; applying ReBorn only to the mixing network leads to better performance.
>Major 3: In many instances, using ReDo appears to hurt performance compared to naive QMIX, despite reducing the dormancy percentage ... applying ReDo does not ... dormant neurons ... if the ReDo method has been appropriately tuned ...
>Question 1: Can the authors comment on how they tuned ReDo and give some insight into why it seems to not have any effect on the number of dormant units in Figure 6?
**For Figure 6, ReDo can reduce dormant ratios.** The middle of Figure 6 (b, e) depicts the dormant ratios for different methods. The results for ReDo (QMIX-ReDo) are depicted as a blue curve, showing that it can reduce the dormant ratio for the QMIX algorithm (depicted as a green curve).
ReDo can **reduce the dormant ratio** in all the cases shown in this work. Please refer to all the figures (Fig. 6(b, f), Appendix Fig. 5 middle, Appendix Fig. 7 bottom row) that depict the dormant ratio for ReDo. Regarding performance, ReDo hurts the performance of QMIX on MMM2, but it increases performance on 27m-vs-30m, which has a higher dormant ratio than MMM2.
For a fair comparison, we performed parameter tuning on ReDo for 2 SMAC environments (MMM2 and 5m vs 6m), starting from ReDo's default parameters, which perform well on 47 single-agent environments. We explored two parameter re-initialization methods (Xavier and Kaiming) and two dormant thresholds (0.025 and 0.1). The results are depicted in Figure 5 of the response PDF. Different configurations perform similarly. In the end, we chose the default parameters of ReDo.
As we discussed in Sec. 4, the presence of over-active neurons impacts the existence of dormant neurons. Therefore, ReDo's low efficiency in reducing dormant neurons might be due to its inability to handle over-active neurons. This is a factor contributing to the effectiveness of ReBorn, which addresses both dormant and over-active neurons.
>Minor: a better name for the so-called 'overweight' neurons would be 'over-active'.
We will change the word "overweight" to "over-active".
>Minor: there are a few grammatical issues in the paper ...
We will change the 'albeit' in a clause to 'although'. We will check the paper carefully to improve its overall quality.
---
Rebuttal 2:
Comment: Thanks to the authors for their response.
- **Nikishin et al. citation:** Apologies for the ambiguity; the Nikishin et al. reference I referred to was not the primacy bias work but rather the later paper on plasticity injection [1], which proposes to freeze the network parameters and initialize a new, trainable network whose output is added to that of the frozen network in order to increase trainability.
- **Monotonicity:** I appreciate the provided example, however this example also seems to depend on an unconstrained mixing network in which there exist parameters under which the output is non-monotonic in the inputs. If I understood correctly, however, QMIX and related methods explicitly constrain their mixing networks to be monotonic under all possible parameterizations. The major aspect of my concern which is still unclear after the authors' response is what property of ReBorn ensures that it will satisfy the KI / IGM principle in a situation where ReDo does not. Providing a concrete worked example of such a case would help to address this concern.
- Could the authors clarify what they mean by “functional property” in their response?
- **ReDo baseline:** While I agree with the authors that ReDo is reducing the dormant neuron ratio relative to the baseline, the specific feature that drew my attention in Figure 6 was the relatively flat slope of the dormant neuron curve in the second half of training in the bottom middle panel, where it seems like each time the ReDo algorithm is called later in training it does not even transiently reduce the number of dormant neurons. This result would benefit from a brief discussion of why applying ReDo is not even temporarily reducing the number of dormant neurons.
[1] Nikishin, Evgenii, et al. "Deep reinforcement learning with plasticity injection." Advances in Neural Information Processing Systems 36 (2024).
---
Rebuttal 3:
Comment: Dear Reviewer,
Thanks for your reply. We address your concerns as follows.
>Nikishin et al. citation: Apologies for the ambiguity, Nikishin et al. reference I referred to was not the primacy bias work but rather the later paper on plasticity injection [1], which proposes to freeze the network parameters and initialize a new, trainable network whose output is added to that of the frozen network in order to increase trainability.
Thanks for the clarification. Plasticity injection (PI) is an interesting work. It aims to increase plasticity by adding new parameters, without changing the neural network's output. Unlike PI, our work does not introduce new parameters. We will discuss it in the related work section.
We have applied PI to the agent network (Agent+PI), to the mixing network (Mixer+PI), and to both the agent and mixing networks (Agent+Mixer+PI) for QMIX in the predator-prey small, predator-prey medium, and MMM2 environments. For these experiments, we use the default hyperparameters of PI. The experimental results are listed below: the return is reported for predator-prey, and the win rate for MMM2. The results show that PI performs worse than ReBorn.
|QMIX|predator-prey small|predator-prey medium|MMM2|
|-|-|-|-|
|Agent+PI|106|173|46|
|Mixer+PI|67|114|64|
|Agent+Mixer+PI|73|125|44|
|ReBorn|112|205|83|
>Monotonicity: I appreciate the provided example, ... The major aspect of my concern which is still unclear after the authors' response is what property of ReBorn ensures that it will satisfy the KI / IGM principle in a situation where ReDo does not. Providing a concrete worked example of such a case would help to address this concern.
Thanks for your question; we will make our writing clearer. We will discuss different variants of ReDo more clearly, and include related experiments.
In single-agent RL, ReDo perturbs the parameters of the agent network. For MARL, ReDo perturbs the agent and mixing networks through functions $h$ and $g$, respectively. $h$ and $g$ are the same; both use Xavier initialization to re-initialize the dormant neurons. As the parameters of the agent network are perturbed through $h$, **the local ranking of actions could change, and so could the global ranking**; thus, ReDo is not guaranteed to satisfy the KI principle.
In Section 6.4.2, we replace the $h$ used by ReDo with the identity function $h(\theta)=\theta$ used by ReBorn, which **does not perturb agent network parameters**. The new method is named ReBorn(ReDo). We have stated in Section 6.4.2 that ReBorn(ReDo) **satisfies the KI principle**. The experimental results show that ReBorn(ReDo) performs worse than ReBorn (Figure 8). Additionally, we apply ReDo to the agent network only, by using the function $g$ for the agent network and the identity function $h$ for the mixing network. In the following table, we show results for applying the $g$ function only to the agent network (Agent-ReDo), only to the mixing network (Mixer-ReDo), to both the agent and mixing networks (ReDo), and for ReBorn. Mixer-ReDo is called ReBorn(ReDo) in Figure 8. For predator-prey small and predator-prey medium, the return is reported. For MMM2, the win rate is reported.
|QMIX|predator-prey small|predator-prey medium|MMM2|
|-|-|-|-|
|Agent-ReDo|88|165|46|
|Mixer-ReDo|105|184|59|
|ReDo|92|177|55|
|ReBorn|112|205|83|
For Agent-ReDo and Mixer-ReDo, the identity function $h$ is applied to the mixing and agent networks, respectively. We find that applying $g$ only to the agent network (Agent-ReDo) yields poor performance, due to the violation of KI and the existence of dormant neurons, which lie mainly in the mixing network. Mixer-ReDo performs better than Agent-ReDo and ReDo, thanks to the satisfaction of the KI principle. However, all these variants of ReDo perform worse than ReBorn.
The $h$ and $g$ functions used by ReBorn differ from those of ReDo. As written in lines 226-227, $h$ is an identity function used for the agent network, and $g$ is applied to the mixing network through weight-sharing among over-active and dormant neurons. ReBorn perturbs the parameters of the mixing network rather than the agent network, thanks to the discovery that there are few dormant neurons in the agent network. For ReBorn, as the agent network is not perturbed, **the local ranking of actions is not changed, and neither is the global ranking of actions for a monotonic mixer**. We have stated in lines 287 and 289 that *if we applied the ReBorn function $g$ to the agent network, the KI principle would be violated*, leading to poor performance, as shown in Section 6.4.1 (Figure 7).
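The $(h, g)$ split can be summarized in code (an illustrative sketch with hypothetical names and a toy weight-sharing rule; the paper's actual $g$ operates on hypernetwork neurons):

```python
import numpy as np

rng = np.random.default_rng(2)

def h_identity(agent_params):
    """ReBorn's h: the agent network is left untouched, so each agent's local
    action ranking (and hence KI for a monotonic mixer) is preserved."""
    return agent_params

def g_share(mixer_w, dormant_idx, overactive_idx):
    """Hypothetical weight-sharing step in the spirit of ReBorn's g: split an
    over-active outgoing weight with a dormant neuron (illustrative only)."""
    w = mixer_w.copy()
    shared = w[overactive_idx] / 2.0
    w[overactive_idx] = shared
    w[dormant_idx] = shared
    return w

agent_q = rng.normal(size=5)
print(bool(np.array_equal(h_identity(agent_q), agent_q)))  # True: no perturbation

mixer_w = np.array([4.0, 0.0, 1.0])  # index 0 "over-active", index 1 "dormant"
print(g_share(mixer_w, dormant_idx=1, overactive_idx=0))  # [2. 2. 1.]
```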
---
Rebuttal 4:
Comment: >Could the authors clarify what they mean by “functional property” in their response?
Functional property means the monotonicity or the constraints for a value factorization method to satisfy the IGM principle.
For some mixers (e.g., QTRAN), although they follow the IGM principle, they may fail to satisfy it after parameter perturbation. For QTRAN, the constraints (Theorem 1, Formula 4b in [1]) that guarantee the IGM principle are enforced through a mean-square loss, not through architectural features (such as an absolute-value activation function). Hence, through parameter perturbation, the IGM principle for QTRAN could be violated.
>ReDo baseline: While I agree with the authors that ReDo is reducing the dormant neuron ratio relative to the baseline, the specific feature that drew my attention in Figure 6 was the relatively flat slope of the dormant neuron curve in the second half of training in the bottom middle panel, where it seems like each time the ReDo algorithm is called later in training it does not even transiently reduce the number of dormant neurons. This result would benefit from a brief discussion of why applying ReDo is not even temporarily reducing the number of dormant neurons.
ReDo is called roughly every 0.2 million steps, and we report the dormant ratio roughly every 10,000 steps. **Most of the time when ReDo is performed, the reported dormant ratio is the one measured before ReDo, not the ratio just after ReDo.** The middle top of Figure 6 depicts the ratio for MMM2. The ratio curve does experience a temporary drop after ReDo is called.
The middle bottom of Figure 6 depicts the ratio for the 27m vs 30m scenario, which has more agents (27) than MMM2 (12). It is more non-stationary, with a higher dormant ratio than MMM2. **When ReDo is called, the dormant ratio does drop immediately, but it increases quickly back to its previous value before the next time the ratio is reported.** The following table shows the dormant ratio over time steps for one run of ReDo on 27m vs 30m. In this table, "Report" indicates the dormant ratio is reported in a figure; "ReDo" indicates that ReDo is performed. At step 1796103, before ReDo is called, the dormant ratio is 30. At step 1796226, just after ReDo, it drops from 30 to 9. **This indicates that ReDo works as designed.** The ratio then quickly increases from 9 to 29 by step 1806262, when it is next reported. Thus, the temporary drop is not visible in the reported curve.
|Time step|Dormant ratio|Action|
|-|-|-|
|1786065|31|Report|
|...|...|...|
|1796103|30|Report|
|1796226|9|ReDo|
|1796300|10||
|1796480|16||
|1796518|15||
|...|...|...|
|1801093|21||
|...|...|...|
|1806262|29|Report|
Reference
[1] Son et al. QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning, ICML 2019 | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their insightful comments and valuable feedback. The reviewers acknowledge our work as novel (KvCg, JB6L, WpXH) and important (hhg9, JB6L), note its significance outside the specific problem setting (hhg9), and recognize its theoretical contributions (KvCg, JB6L, WpXH) and good empirical results (hhg9, JB6L, WpXH). We will incorporate the suggestions and address the concerns in the new version of our work. We have conducted 56 additional experiments to address the comments, and their results are included in the response PDF. We describe the figures as follows.
**Experimental Results**
Fig. 1: We have re-designed the figure and improved its caption to make it more readable.
Fig. 2: Comparison of ReBorn with new baselines [1][2], Reset, and ReDo. The experimental results show that ReBorn performs better than all of them.
Fig. 3: The MSE Loss for fitting a simple Mixing Network increases with an increasing number of Dormant Neurons. We have improved the legend and the caption of the figure to enhance readability.
Fig. 4: Replacing the weight scaling strategy used in ReBorn with an average weight sharing strategy. The experimental results indicate that the weight scaling strategy performs better.
Fig. 5: Hyper-parameter tuning for the ReDo method. Different hyper-parameter settings exhibit similar performance.
Fig. 6: Applying ReBorn to the critic network in MADDPG and MAPPO. ReBorn can reduce the dormant ratio of the critic network and improve its performance as well.
**References**
[1] D'Oro et al., Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier, ICLR, 2023
[2] Yang et al., Sample-Efficient Multiagent Reinforcement Learning with Reset Replay, ICML, 2024
Pdf: /pdf/4088bc9b4a197b0425734d6b8d0ac585a62a04d6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 |
Aligning Embeddings and Geometric Random Graphs: Informational Results and Computational Approaches for the Procrustes-Wasserstein Problem | Accept (poster) | Summary: The authors consider the alignment problem between a point cloud generated from a Gaussian distribution and its noisy version under orthogonal transformation, formulated as the Procrustes-Wasserstein problem. The authors derive information-theoretic results for both high and low dimensional regime (i.e., bounds on the maximum likelihood estimators to the optimal one). Additionally, the authors also propose an alternating algorithm, namely Ping-Pong algorithm, which is a variant of the algorithm in Grave et al. (2019). Empirically, the authors show advantages of the proposed algorithm over other baselines.
Strengths: + The authors derive the upper bound (or lower bound) of the maximum likelihood estimator to the optimal one for the Procrustes-Wasserstein (PW) problem in both high and low dimensional regime.
+ The authors propose a variant alternating algorithm, namely Ping-Pong for the PW problem, and illustrate its advantages over other baselines.
Weaknesses: + The title seems misleading. Its relation to random graphs is not clear.
+ The motivation of the considered problem seems weak. It is better to elaborate the problem. (e.g., why it is interesting to consider point clouds generated from a Gaussian distribution and its noisy version under orthogonal transformation)
+ It is not clear what the advantage of the proposed Ping-Pong algorithm is over the alternating algorithm considered in Grave et al. (2019). It would be better to deepen the analysis and empirical evidence.
Technical Quality: 2
Clarity: 2
Questions for Authors: It is interesting to generalize the problem considered in Kunisky and Niles-Weed (2022) into the considered Procrustes-Wasserstein problem. I have some following concerns:
+ The title seems misleading. Could the authors comment on the relation between the considered problems in line 29-39 with random graphs in the title?
+ Could the authors elaborate why it is interesting to consider Procrustes-Wasserstein problem between a point cloud generated from Gaussian distribution and its noisy version under orthogonal transformation?
+ For the high-dimensional regime, why do we need a lower bound on $\pi$, but an upper bound on $Q$? What about an upper bound on $\pi$ and a lower bound on $Q$ – do they play a role in the considered problem setting?
+ For the Algorithm 1, why is it possible to fix T, and K? Does the algorithm converge? Is it possible to have some reasonable stopping conditions?
+ Could the authors clarify why Algorithm 1 is superior to the algorithm in Grave et al. (2019)? It would be better to deepen the analysis and illustrate with rigorous empirical evidence, e.g., with a larger number of data points (Figure 1 only shows n <= 200).
+ In line 104-108, the authors emphasize the difference with other related works. Additionally, as in Figure 1, the authors also consider the case where d=2 and d=5, it is not clear why these approaches are not considered as baselines (as claimed in line 102-103). Could the authors clarify it?
+ When Q*= I, could the authors compare the presented results with those in Kunisky and Niles-Weed (2022)?
+ The Procrustes-Wasserstein problem is a non-convex problem. It is better to consider its initialization problem as well. It is not clear about the importance of the one-step analysis for Ping-pong algorithm in the context of nonconvex optimization? Could the authors clarify it?
---------
The rebuttal has addressed some of my concerns; I increase the score to 5.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: It seems there is no discussion on the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time spent reviewing and all the remarks and questions that will help clarify our paper. We make sure to address the concerns raised below.
*The title seems misleading. Could the authors comment on the relation between the considered problems in line 29-39 with random graphs in the title?* There is a strong connection between the PW (Procrustes-Wasserstein) problem and GGA (geometric graph alignment), as we argue in our paper.
In GGA, we observe two complete random graphs and try to recover an underlying node correspondence $P^*$. The randomness in this model comes from edge weights induced by a Gaussian model: the weights are scalar products between unobserved Gaussian random vectors. Note that applying an orthogonal map $Q^*$ to all the vectors leaves the scalar products invariant.
In PW on the other hand, we directly observe the vectors and want to recover both $P^*$ and $Q^*$. As described in section 'Geometric graph alignment' of the introduction, the two problems (GGA and PW) are equivalent. This is formalized in Lemma 1, for which a proof is given in Appendix A.
To make it clearer, we propose to highlight Lemma 1 more in a revised version since it clarifies the title.
*Could the authors elaborate why it is interesting to consider Procrustes-Wasserstein problem between a point cloud generated from Gaussian distribution and its noisy version under orthogonal transformation?*
This is a fundamental question which can be answered as follows. Many learning problems on real-world data can be phrased as optimization problems, which are often difficult in their worst-case formulation, meaning the case where *any* instance of the problem is considered. A common approach to understanding a problem's hardness and deriving algorithmic guarantees is to consider the *planted version* of these problems, that is, when data has more structure. This structure stems from a signal (here: an underlying alignment $P^*$ and an orthogonal transformation $Q^*$) and noise (here: Gaussian noise). The benefit of this approach is that we have a precise understanding of why the problem is easy or difficult based on the signal-to-noise ratio $\sigma$. For a broad introduction to planted problems, we refer to the survey on community detection by C. Moore, https://arxiv.org/pdf/1702.00467, page 5.
*For the high dimensional regime, why do we need lower-bound on $\pi$, but upper bound on $Q$? How’s about the upper-bound on $\pi$, and lower-bound on $Q$ – do they have some roles in the considered problem setting?*
An upper bound on $\ell^2$ (for $Q$) and a lower bound on the overlap of $\pi$ point in the same direction: To show that recovery is possible, a larger overlap is better and a lower $\ell^2$-loss is better as well.
*For the Algorithm 1, why is it possible to fix T, and K? Does the algorithm converge? Is it possible to have some reasonable stopping conditions?*
It is possible to fix $K,T$ arbitrarily, depending on the computation power available.
Empirically, the algorithm converges after $O(1)$ iterations.
The smaller the signal-to-noise ratio, the larger the number of required iterations.
We show in Proposition 1 that if $\sigma$ is small enough, $K,T=1$ is already sufficient.
*Could the authors clarify why the algorithm 1 is superior to the algorithm in Grave et al. (2019)? It is better to deepen the analysis and illustrate with rigorous empirical evidence? E.g., with large number of data points (e.g., in Figure 1, it shows for n <= 200)*
Intuitively, our algorithm is better than that of Grave et al. (2019) since it is more 'greedy' at each iteration.
Empirically, Figure 1 shows that this is indeed the case for the whole considered range of parameters. The superiority is even stronger for small signal-to-noise ratios and large dimensions.
We propose, as suggested by the reviewer, to extend our empirical analysis to larger number of data points ($n$ larger) and larger dimensions.
*In line 104-108, the authors emphasize the difference with other related works. Additionally, as in Figure 1, the authors also consider the case where d=2 and d=5, it is not clear why these approaches are not considered as baselines (as claimed in line 102-103). Could the authors clarify it?*
In Figure 1, we consider $d=2$ and $d=10$. These are not baselines; they are respectively meant to capture the small-dimension ($d \ll \log(n)$) and large-dimension ($d \gg \log(n)$) settings, as our paper shows that there is a difference between these two regimes.
*When Qstar = I, could the authors compare the presented results with those in Kunisky and Niles-Weed (2022)?*
This is the content of Table 1: in particular, we show that for small dimensions, since we consider the $c^2$ transport cost, we are able to recover some signal even for $\sigma=\Omega(1)$, whereas Kunisky and Niles-Weed (2022) require a much smaller signal-to-noise ratio ($\sigma \ll n^{-1/d}$).
We will make this comparison clearer in the paper.
*The Procrustes-Wasserstein problem is a non-convex problem. It is better to consider its initialization problem as well. It is not clear about the importance of the one-step analysis for Ping-pong algorithm in the context of nonconvex optimization? Could the authors clarify it?*
Due to lack of space, we refer to our detailed answer in the rebuttal to Reviewer ERNU (first part).
*It seems there is no discussion on the limitation.*
We propose to discuss the limitations of our work more clearly in a new separate paragraph, such as the problems left open, negative lower bounds and sharpness of our results, a deepened analysis of our algorithm and its initialization, the existence of computational-to-statistical gaps, and planted problems as tractable proxies for more complex problems.
We hope that our rebuttal answers the reviewer's main questions. We would greatly appreciate an adjusted review score if the concerns are lifted, and remain available for questions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for your explanation in the rebuttal. It addresses some of my concerns, and I increase the score to 5.
I have some quick questions as follows:
**(1) For GGA (geometric graph alignment)**
- Could you elaborate in further detail on why applying an orthogonal map $Q^*$ to all the vectors leaves the scalar products invariant? I agree about the role of $P^*$, but the role of $Q^{*}$ in the GGA is still unclear.
- Do we only try to recover the node correspondence? What is the role of edge weights in the GGA?
**(2) For the proposed Ping-Pong algorithm**
- In Grave et al. (2019), they also consider the convex relaxation and leverage the Frank-Wolfe algorithm, as in the proposed algorithm. Could you comment on it? It would be better if you could give some analysis/discussion and/or empirical illustration.
I will further adjust the score accordingly.
---
Reply to Comment 1.1.1:
Title: Response and clarifications
Comment: Thank you for your questions.
**(1)**
In GGA, whatever the orthogonal transformation applied, the scalar product is unchanged: $\langle Qx_i, Qx_j\rangle = \langle x_i,x_j\rangle$.
We thus only seek to retrieve the permutation, since the GGA problem is invariant by orthogonal transformation.
As for the role of edge weights, it is unclear to us what the question is: our point above is that the observed edge weights are invariant under orthogonal transformations.
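The invariance can be illustrated with a quick numerical check (our own sketch): for any orthogonal $Q$, $\langle Qx, Qy\rangle = x^\top Q^\top Q y = \langle x, y\rangle$, so the GGA edge weights carry no information about $Q^*$.

```python
import numpy as np

rng = np.random.default_rng(4)

d = 5
# A random orthogonal matrix: the Q factor of a QR decomposition has orthonormal columns.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
x, y = rng.normal(size=d), rng.normal(size=d)
# Scalar product before and after applying Q is identical (up to floating point).
print(bool(np.isclose((Q @ x) @ (Q @ y), x @ y)))  # True
```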
**(2)**
Grave et al. indeed use the same initialization.
Our paper distinguishes itself from their work in two ways. The first is in the methodology: the *amplification* step is more greedy and thus more efficient, while having the same computational cost. The second is that we propose a 1-step analysis, paving the way for a more general analysis. | Summary: This paper studies the Procrustes-Wasserstein problem that aims to match two high-dimensional point clouds where one is a noisy version of the other up to an orthogonal transformation. The authors establish information-theoretic results in the high ($d \gg \log n$) and low ($d \ll \log n$) dimensional regimes. Further, the authors also propose a "Ping-Pong algorithm" that alternatively estimates the orthogonal transformation and the matching. Sufficient conditions for the method to recover the planted signal after one step is provided. The theoretical finds are also supported by numerical experiments.
Strengths: 1. This paper defines a planted model for the Procrustes-Wasserstein problem that extends the work of Kunisky and Niles-Weed [2022] and Wang et al. [2022].
2. Focusing on the $L_2$ transport cost between the point clouds, in contrast to previous works that mostly consider the overlap, the authors established information-theoretic limits in the low-dimensional regime ($d \ll \log n$), which substantially differ from those of Wang et al. [2022], and in the high-dimensional regime ($d \gg \log n$),
which was not explored before.
3. A "Ping-Pong Algorithm", first initialized by a Franke-Wolfe convex relaxation, then alternatively estimating the orthogonal transformation and the relabeling, is proposed. Statistical guarantees for *one single step* of the algorithm is analyzed.
Weaknesses: 1. Due to technical challenges, only guarantees for one step of the proposed algorithm is analyzed.
2. The recovery guarantees are only provided in the overlap metric rather than the $c^2$ loss which is claimed to behave very differently when $d$ is small.
3. The dependency on the noise parameter $\sigma$ in the statistical rates seems to be different from that of the ML estimators. The tightness of the results in terms of $\sigma$ is unknown.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do the statistical guarantees for the proposed algorithm compare to the information-theoretic results? It would be nice if the authors could remark on this.
2. Are there any negative information-theoretic results, i.e., lower bounds on the costs, for the Procrustes-Wasserstein Problem of the planted model?
3. Can there possibly be computational-statistical gaps for the problem?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors addressed most limitations of the work. Some additional ones are highlighted in Weaknesses and Questions sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's feedback and questions, that will help improve the clarity of the paper. We answer the questions raised below.
*Due to technical challenges, only guarantees for one step of the proposed algorithm is analyzed.*
This is indeed true: our analysis of the Ping-pong algorithm is limited to a 1-step version. As acknowledged in the paper, we did not manage to analyze the Ping-pong algorithm in a more detailed way, for two reasons: *(i)* the initialization is a hard problem to study (as we argue in the beginning of Section 3.2, there are no guarantees for the relaxed QAP) and *(ii)* the alternating minimization steps have eluded our analysis so far due to the non-convexity of the minimization steps.
However, it is to be noted that unlike previous results for relaxed QAP (such as for instance Valdivia and Tyagi, 2023), Proposition 1 offers recovery guarantees for non-null noise (even though the noise is required to be small enough).
Deepening our understanding of the Ping-pong algorithm is a challenging work in progress.
*The recovery guarantees [of the Ping-pong algorithm] are only provided in the overlap metric rather than the loss which is claimed to behave very differently when $d$ is small.*
For small dimensions, the overlap metric and the $c^2$ transport cost indeed behave very differently: a guarantee in terms of $c^2$ cost cannot be translated into a guarantee in terms of overlap.
However, even for small dimensions, guarantees in terms of overlap metric (as the one of Proposition 1) can be translated in terms of transport cost, since we always have $c^2(\pi,\pi') \leq \max_{i,j} \Vert x_i-x_j\Vert_2^2 \times (1-\text{overlap}(\pi,\pi'))$.
For large dimensions, we have $\max_{i,j}\Vert x_i-x_j\Vert_2^2 =O(d)$ whp, while for small dimensions we have $\max_{i,j}\Vert x_i-x_j \Vert_2^2 =O(d\log(n))$ whp.
We will add such a discussion in a revised version.
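The overlap-to-transport-cost translation above can be checked numerically on a small instance (a brute-force illustration of ours, not code from the paper):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)

# Brute-force check of the bound
#   c^2(pi, pi*) <= max_{i,j} ||x_i - x_j||^2 * (1 - overlap(pi, pi*)),
# where overlap is the fraction of indices on which the permutations agree.
n, d = 6, 3
x = rng.normal(size=(n, d))
max_sq = max(np.sum((x[i] - x[j]) ** 2) for i in range(n) for j in range(n))

def c2(pi, pi_star):
    """Average squared displacement between the two relabelings."""
    return np.mean(np.sum((x[list(pi)] - x[list(pi_star)]) ** 2, axis=1))

def overlap(pi, pi_star):
    return np.mean([a == b for a, b in zip(pi, pi_star)])

pi_star = tuple(range(n))
ok = all(c2(pi, pi_star) <= max_sq * (1 - overlap(pi, pi_star)) + 1e-9
         for pi in permutations(range(n)))
print(ok)  # True: the bound holds for every permutation on this instance
```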
*The dependency on the noise parameter in the statistical rates seems to be different from that of the ML estimators.*
This assertion is true: The ML estimator as in Wang et al., 2022 has guarantees that are expressed in terms of overlap metric, while our guarantees are expressed in terms of transport cost $c^2$, which leads to signal recovery even for noise $\sigma$ that does not tend to 0.
*Are there any negative information-theoretic results, i.e., lower bounds on the costs, for the Procrustes-Wasserstein Problem of the planted model?* and *the tightness of the results in terms of $\sigma$ is unknown.*
The tightness of our results is still an open question we are working on. Hence, we thank you for your inquiry; this is indeed an interesting point. We think that the IT results in the paper are sharp, at least in small dimensions, and not far from being sharp in high dimensions. We believe that whatever the value of $d$, when $\sigma \not \to 0$, one should be able to show that the optimization problem has many solutions, and that some of them are far from the ground truth, in the $c^2$ as well as the $\ell^2$ sense.
*How do the statistical guarantees for the proposed algorithm compare to the information-theoretic results? It would be nice if the authors could remark on this.*
The statistical guarantees for the Ping-pong algorithm (Section 3) are weaker than the information-theoretic results of Section 2.
While in Section 2 we are able to recover some signal (in the $\ell^2$ and $c^2$ sense) as long as $\sigma \not \to 0$, Proposition 1 requires $\sigma\to 0$ at a polynomial rate in $n$ to recover any signal for the Ping-pong algorithm.
However, experiments suggest that the Ping-pong algorithm still recovers some signal even for $\sigma=\Omega(1)$, suggesting that our analysis is suboptimal, leaving this question open for future works.
*Can there possibly be computational-statistical gaps for the problem?*
This is a very interesting question, which we have not yet investigated.
Comparing known computational results (Proposition 1 in our paper; the results from Gong and Li, 2024), there is still a gap between our informational results and the cited computational results.
However, we believe that this gap is mostly due to suboptimality of the analyses.
There might be computational-statistical gaps, but investigating these would require using specific methods, for instance the low-degree method (Hopkins, 2018, *Statistical Inference and the Sum of Squares Method*), which has proven to be effective for planted recovery problems like the one we are interested in.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: I thank the authors for the detailed responses. I maintain my positive rating of the paper. | Summary: This submission is concerned with the problem of aligning planted graphs. To this end, both a permutation $\pi$ and an orthogonal matrix $Q$ must be estimated from observations of $X$ and $Y$. To evaluate the performance of this approach, it is proposed to measure the error induced by $Q$ in terms of the squared Frobenius norm to the true underlying matrix $Q^{\star}$ (denoted $\ell^2(Q,Q^{\star})$) whereas the error induced by $\pi$ is given by $c^2(\pi,\pi^{\star})=\frac 1 n\sum_{i=1}^n||x_{\pi(i)}-x_{\pi^{\star}(i)}||^2$.
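A minimal sketch of the two error metrics defined in the summary above (a hedged illustration, not code from the paper; the function names and the convention that `X` stores the points $x_i$ as rows are our own assumptions):

```python
import numpy as np

def l2_error(Q, Q_star):
    # Squared Frobenius distance between the estimated and true orthogonal matrices.
    return np.linalg.norm(Q - Q_star, ord="fro") ** 2

def c2_error(pi, pi_star, X):
    # Transport cost c^2(pi, pi*) = (1/n) * sum_i ||x_{pi(i)} - x_{pi*(i)}||^2.
    return np.mean(np.sum((X[pi] - X[pi_star]) ** 2, axis=1))
```

Both metrics vanish exactly when the estimator coincides with the ground truth.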
The first contribution of this work is to provide information-theoretic results for this problem. Namely, the simple maximum likelihood estimators for $\hat \pi$ (which can be efficiently computed) and $\hat Q$ (which has a closed form solution) recover $\pi^{\star}$ and $Q^{\star}$ almost exactly in the limit of vanishing noise in the high-dimensional regime ($d\geq 2\log n$). In the low-dimensional regime $d\ll \log n$, it is shown that if the noise is of the order $o(d^{-1/2})$, then estimators $\hat Q,\hat \pi$ can be formulated for which $c^2(\hat \pi,\pi^{\star})=o(d)$ and $\ell^2(\hat Q,Q^{\star})=o(d)$. It is shown, moreover, that the good performance of one of the two estimators can be transferred to the other estimator in a straightforward (and computationally tractable) manner.
Next, as the PW problem is equivalent to the quadratic assignment problem (QAP), which is known to be NP-hard in general, a convex relaxation of the QAP is considered which consists of maximizing the objective over all bistochastic matrices. The proposed algorithm for approximating the QAP consists of first solving the relaxed problem via the Frank-Wolfe algorithm, then using this solution to warm-start an alternating minimization procedure for the permutation matrix and the orthogonal matrix. Guarantees for one step of this algorithm are then provided. The paper concludes with some numerical experiments which show that the proposed Ping-Pong algorithm generally outperforms direct resolution of the relaxed QAP and the method of Grave et al.
Strengths: The paper is well-written and places itself well within the existing literature on this problem.
To my understanding, the high-dimensional results are the first of their kind, whereas the low-dimensional results improve on previously known results. The proposed algorithm is also seen to perform best out of the methods considered.
Weaknesses: The analysis of the proposed ping-pong algorithm is quite limited. The one-step result provided in Proposition 1 is already an interesting step in the analysis, but it would be interesting to see a more refined analysis. Given the fact that the QAP problem is NP hard it is clear that we cannot expect particularly strong results in general (e.g. convergence to global minimizer), but it may be possible to discern some properties of the matrices to which the algorithm converges or to provide a bound on the number of iterations required in certain cases.
At the very least it would be useful to recall the per iteration complexity.
Technical Quality: 3
Clarity: 4
Questions for Authors: The authors are kindly requested to answer to the above point.
I noted the following typos:
1. line 139: is its probability -> if its probability.
2. line 279: envelop -> envelope.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The assumptions are clearly stated in each relevant result.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough reading and review, the positive assessment, and the valuable feedback, which will help improve the quality of the paper in its second version.
We answer the questions raised in the review below.
*The analysis of the proposed ping-pong algorithm is quite limited. The one-step result provided in Proposition 1 is already an interesting step in the analysis, but it would be interesting to see a more refined analysis.*
Our analysis of the Ping-pong algorithm is indeed limited to a 1-step version. As acknowledged in the paper, we did not manage to analyze the Ping-pong algorithm in a more detailed way for two reasons: *(i)* the initialization is a hard problem to study (as we argue at the beginning of Section 3.2, there are no guarantees for the relaxed QAP) and *(ii)* the alternative minimization steps eluded our analysis so far due to the non-convexity of the minimization steps.
However, it is to be noted that, unlike previous results for relaxed QAP (such as Valdivia and Tyagi, 2023), Proposition 1 offers recovery guarantees for non-null noise (even though the noise is required to be small enough).
Deepening our understanding of the Ping-pong algorithm is a challenging work in progress.
*Given the fact that the QAP problem is NP-hard it is clear that we cannot expect particularly strong results in general (e.g. convergence to global minimizer), but it may be possible to discern some properties of the matrices to which the algorithm converges or to provide a bound on the number of iterations required in certain cases. At the very least it would be useful to recall the per iteration complexity.*
We will recall the per iteration complexity of the Ping-pong algorithm in a revised version.
For the initialization, each of the $T$ steps of the Frank-Wolfe algorithm is a LAP (linear assignment problem) that has complexity $O(n^3)$ (which can be reduced to $O(n)$ via entropic regularization at the cost of approximations).
The SVD in the ‘Ping' step has complexity $O(d^3)$ while the LAP in the ‘Pong' step has complexity $O(n^3)$ (again, entropic regularization can reduce this to $O(n)$).
The overall complexity of the algorithm is thus $O(Tn^3+ K(n^3+d^3))$ (or $O((K+T)n+ Kd^3)$ if we use entropic regularization).
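For concreteness, one 'Ping'/'Pong' iteration as described above can be sketched as follows (a hedged illustration, not the authors' code; the model convention $Y \approx X[\pi^\star] Q^\star + \text{noise}$ and all function names are our own assumptions):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ping(X, Y, perm):
    # 'Ping': with the matching fixed, the best orthogonal Q has a closed
    # form via SVD (orthogonal Procrustes); the SVD costs O(d^3).
    U, _, Vt = np.linalg.svd(X[perm].T @ Y)
    return U @ Vt  # minimizes ||X[perm] @ Q - Y||_F over orthogonal Q

def pong(X, Y, Q):
    # 'Pong': with Q fixed, the best matching is a LAP on inner products;
    # the Hungarian-type solver costs O(n^3).
    cost = -(Y @ (X @ Q).T)  # entry (i, j) = -<y_i, x_j Q>
    _, perm = linear_sum_assignment(cost)
    return perm  # perm[i] = index j such that x_j Q is matched to y_i
```

In the noiseless case, each step recovers the corresponding ground-truth component exactly when the other is known.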
---
Rebuttal Comment 1.1:
Comment: I thank the authors for clarifying these points. I concur that a more in-depth study of the ping-pong algorithm is of great interest, but it is likely complicated and would be deserving of a separate paper. The addition of the overall complexity is useful in my opinion. | Summary: This paper studies the theoretical limits of the Procrustes-Wasserstein problem, which consists in finding an optimal assignment of clouds of points in Euclidean space up to a global rotation. The paper focuses on the model of a cloud of points perturbed by a global rotation and some Gaussian noise. The authors establish theoretical results on the recovery of the optimal rotation and the optimal assignment in two cases of interest: high dimension/low number of samples and low dimension/high number of samples. They also propose a Frank-Wolfe type of algorithm with an initial step based on convex relaxation: this algorithm is shown to perform better than other state-of-the-art algorithms.
Strengths: The paper is extremely well-written, it is easy to read. In my opinion, the questions addressed in the paper are of interest to the community interested in geometric alignment. The improvement upon Grave's algorithm seems interesting as well.
The estimator in the low-dimensional case introduces the idea of slicing along one-dimensional subspaces, and it is remarkable that this idea, which was used in other contexts, works well in this low-dimensional problem.
Weaknesses: The theoretical results are of interest to a relatively narrow portion of the NeurIPS audience.
Technical Quality: 3
Clarity: 4
Questions for Authors: Why is the conical alignment loss not used in practical experiments? Is it because it gives poor practical performances?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive feedback and the valuable comments.
We answer the question raised below.
*Why is the conical alignment loss not used in practical experiments? Is it because it gives poor practical performances?*
The conical alignment loss is useful for informational results only, and is not used in practical experiments because its computational complexity is very high. The reason is that one has to optimize over an $\varepsilon$-net of asymptotic size $(\sqrt{d}/\varepsilon)^{d^2}$, and each evaluation on this $\varepsilon$-net has complexity $O(pn)$ where $p \geq \mathrm{polylog}(1/\sigma, d)$. This yields a total complexity which is superexponential in $d$.
In other words, the statistical performance of the conical alignment loss is strong, while its computational efficiency is not. We do, however, believe that the conical alignment loss could be a path towards more efficient algorithms for this problem, which would seek to approximately minimize this loss. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Long-form factuality in large language models | Accept (poster) | Summary: This paper introduces LongFact, a dataset consisting of 2,280 questions across 38 topics, designed to assess the factuality of long-form answers generated by large language models. The authors also propose a new evaluation method called the Search-Augmented Factuality Evaluator (SAFE), which utilizes large language models and Google search to verify the accuracy of individual facts within long responses. Additionally, the paper introduces the F1@K metric, which is used to measure the precision and recall of factual accuracy in the model's responses.
Strengths: 1. The workload of the paper is substantial, including the construction of the dataset, evaluation methods, and assessment metrics.
2. The paper introduces the comprehensive LongFact dataset, which covers 38 topics, providing broad domain coverage that enhances the depth and breadth of the evaluation.
3. The SAFE method reduces the cost and time of manual evaluation.
Weaknesses: 1. The layout of this paper could be improved. Many significant pieces of information have been relegated to the appendix, which may result in key details and data being overlooked by readers at first glance. For instance, the SAFE method, one of the contributions of this paper, should be prominently discussed in the main body of the text. However, it is only briefly mentioned in passing, which does not do justice to its importance.
2. The paper frequently cites the FactScore method as "such as Min et al. (2023)" without specifying which aspects were used. The author should at least summarize the key elements adopted, rather than requiring readers to refer to the original paper for details.
3. Table 16 shows repeated queries for each fact, which contradicts the claim in the "Rating Individual Facts" section that including retrieved results in prompts prevents duplicates. The author needs to address this inconsistency.
Technical Quality: 3
Clarity: 3
Questions for Authors: The author mentions that SAFE corrected 76% of the discrepancies with human annotations. Does this suggest that the annotation results from Min et al. (2023) are flawed? If so, does the reported 72% agreement rate with data annotated by Min et al. (2023) still hold significance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The results in Table 2 should be differentiated, such as by bolding the best outcomes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and thoughtful reviews. Let us address your comments below.
> The layout of this paper could be improved. Many significant pieces of information have been relegated to the appendix...
Thanks for the suggestion. We agree and will move some more information on the procedure of SAFE and its comparison with human annotators to Sections 3 and 4 of the main body.
> The paper frequently cites the FactScore method as "such as Min et al. (2023)" without specifying which aspects were used. The author should at least summarize the key elements adopted, rather than requiring readers to refer to the original paper for details.
Thanks for the feedback. We will clarify those aspects in the revision. The specific aspects we refer to in the FActScore paper (Min et al. 2023) are:
- Line 48, 136, 155, 704, 914, 1042, 1725: FActScore human annotations from crowdsourced human annotators
- Line 93, 829: the step of decomposing the long-form response into atomic facts in FActScore
- Line 621, 966: the FActScore metric (which is essentially average precision) in FActScore’s Section 3.1
- Line 713, 832, 848: the FActScore data on biography
- Line 789, 806: the FActScore label categories for atomic facts: "Supported", "Not-supported", and "Irrelevant" (FActScore Section 3.3)
> Table 16 shows repeated queries for each fact, which contradicts the claim in the "Rating Individual Facts" section that including retrieved results in prompts prevents duplicates. The author needs to address this inconsistency.
Thanks for catching this! Our prompt seeks to prevent duplicates but is indeed not infallible because the model may not fully follow these instructions. We’ve made this more clear by adjusting the language in the "Rating Individual Facts" section to "discourage the model from issuing duplicates". Of course, there are engineering methods (other than prompt tuning) that can prevent duplicates, including resampling when encountering duplicate questions and increasing temperature for query generation (the current temperature is 0.1, as noted in Footnote 18).
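As an illustration of the resampling idea mentioned above, a hypothetical retry loop around the query-generation step could look like the following sketch (the helper name, the `generate_query` stand-in for the LLM call, and the retry cap are all our own assumptions, not part of SAFE):

```python
def next_unique_query(generate_query, seen, max_attempts=5):
    # Re-sample the query-generation call until it produces a query not yet
    # issued for this fact, up to a capped number of attempts; the last
    # sampled query is returned even if all attempts were duplicates.
    query = generate_query()
    for _ in range(max_attempts - 1):
        if query not in seen:
            break
        query = generate_query()
    seen.add(query)
    return query
```

Raising the sampling temperature on retries would further reduce the chance of repeated duplicates.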
> The author mentions that SAFE corrected 76% of the discrepancies with human annotations. Does this suggest that the annotation results from Min et al. (2023) are flawed? If so, does the reported 72% agreement rate with data annotated by Min et al. (2023) still hold significance?
Thank you for this interesting question about the annotations from Min et al. (2023). We believe that:
- SAFE's 76% win rate on discrepancies doesn't mean crowd-sourced human annotations are flawed, because we manually checked a random subset of 50 individual facts (35 "Supported" and 15 "Not Supported") on which SAFE and human annotators agreed upon, and 48 (96%) were rated correctly.
- Since human annotations are not flawed, SAFE's 72% agreement rate holds significance, because the agreement rate was calculated on a large number of (16,011) individual facts (Figure 4 in Line 124-133), meaning that SAFE is as good as crowd-sourced human annotations on the agreements.
> The results in Table 2 should be differentiated, such as by bolding the best outcomes.
Thanks for the suggestion. We will make this change in the revision.
---
Rebuttal Comment 1.1:
Title: Further questions
Comment: Thanks for the author's response. I still have some questions.
1. When the author used SAFE for evaluation, they employed Google search. For each query, how many search results were used to conduct fact verification?
2. Although the author states that large language models do not always follow human instructions, looking at the sample tables 16-18 provided by the author, there are too many repeated queries. I'm somewhat concerned whether your query generation strategy is truly effective.
3. When the author used an LLM agent for fact annotation, they re-annotated the results where SAFE and human raters disagreed by using Google search. What was the purpose of this? Was it because the annotations of these inconsistent results were incorrect? The author denies this. Can you explain this?
---
Reply to Comment 1.1.1:
Title: Response to further questions
Comment: Thanks for the further questions. Please find our answers below.
> When the author used SAFE for evaluation, they employed Google search. For each query, how many search results were used to conduct fact verification?
When using SAFE for evaluation, we returned 3 search results per query from the Server API. We discussed this in Appendix C.1 Line 799 and ablated it in Appendix C.6 Line 881-896 and Figure 13. We are happy to make this clearer in the revised paper by adding this number into our main paper in the section first presenting SAFE (Section 3).
> Although the author states that large language models do not always follow human instructions, looking at the sample tables 16-18 provided by the author, there are too many repeated queries. I'm somewhat concerned whether your query generation strategy is truly effective.
First, there are many other atomic claims whose fact-checking queries don’t have duplicates. One example is that for the question "Who is Antoni Gaudí?", the fact-checking queries from SAFE for the claim "The Sagrada Família is perhaps Antoni Gaudí's most famous work" are:
- Is the Sagrada Família considered Antoni Gaudí's most famous work?
- What are some other notable works by Antoni Gaudí besides the Sagrada Família?
- Antoni Gaudí Sagrada Família famous work evidence
- Did Antoni Gaudí complete the Sagrada Família church before his death?
- Antoni Gaudí Sagrada Família completion date
Second, there are indeed some claims for which there are repeated fact-checking queries, like the ones we showed in Table 16 and 18. Our insights are twofold:
- On the one hand, prompting models with better instruction-following capabilities and rewording the prompt to emphasize diversity may reduce the duplication of queries.
- On the other hand, some claims are easier to be verified by fewer than 5 queries, in which case the model or human may not be able to or even need to come up with that many different queries. In this case, the model may use duplicated queries since the SAFE pipeline forces the model to use exactly 5 queries. Empirically, we see that claims that only contain **objective** words can be fact-checked more easily with less diverse queries, whereas claims with **subjective** judgements (like "famous" in the above example and "significant" in main text Figure 3, between Line 105 and 106) need more diverse queries to collect evidence from multiple angles for the LLM to finally make a judgment.
Inspired by the above observations, an idea to improve SAFE is to add an option for the LLM to stop generating more queries when it determines that the collected evidence is enough to judge whether the claim is supported. Right now we instruct the LLM to issue 5 search queries using the instruction "To do this, you are allowed to issue ONE Google Search query that you think will allow you to find additional useful evidence." We do this regardless of whether the already-collected evidence is enough to make the final annotation. The change should reduce cost and focus resources on more-difficult claims.
Thanks again for bringing this up and inspiring us to further improve our method. We will also make the above clarifications in our manuscript.
> When the author used an LLM agent for fact annotation, they re-annotated the results where SAFE and human raters disagreed by using Google search...
In Section 4, we indeed manually annotated facts where SAFE and human raters from Min et al. 2023 disagreed. We did this because we wanted to answer the question: "when SAFE and human raters disagree, is SAFE usually correct (and therefore the human raters were incorrect), or are the human raters usually correct (and therefore SAFE was incorrect)?" We believe that answering this question helps us understand how SAFE performs as an automated factuality annotator relative to crowdsourced humans.
In this experiment, we assumed that our annotations ("researcher + full-internet access") were ground-truth labels, since we believe that these expert-level annotations, made with more available resources and (presumably) greater care, are more representative of true factuality relative to crowdsourced humans. As stated in our previous response, however, this does not mean that **all** of the results from Min et al. 2023 are in question; we only note that when SAFE and the annotations from Min et al. 2023 disagree (which could indicate that a fact is difficult to verify), SAFE is often correct. Indeed, the crowdsourced annotations from Min et al. 2023 are only "incorrect" relative to our definition of factuality for these disagreement cases. Since there is still a 76% agreement rate and we saw that there are extremely few facts (2/50 in our sampled batch, or 4%) where both crowdsourced human annotators and SAFE would return incorrect annotations, we posit that almost all of these 76% of the 16k facts were indeed correctly annotated.
Please let us know if we can clarify this further. Thanks again for the inspiring questions. | Summary: In this work, the authors investigate the evaluation of LLMs’ factuality in long-form generation. Specifically, they first introduce LongFact, a multi-topic benchmark for assessing long-form factuality. In LongFact, they use GPT-4 to generate questions about specific concepts or objects from given topics. Additionally, they design a search-augmented factuality evaluator (SAFE), which evaluates the LM’s output by breaking down the generation into individual claims and then using the Google Search API to verify the correctness of these claims. They also conduct experiments with popular LLMs and find that larger models exhibit better factuality in long-form generation.
Strengths: Originality: The authors propose a new benchmark for evaluating long-form generation. Unlike previous studies, LongFact encompasses multiple topics. The SAFE evaluation pipeline introduces the use of the Google Search API to estimate the correctness of claims, a novel approach not previously employed.
Quality: This paper explores both the effectiveness and the cost of evaluation compared to human annotators, providing strong support for the value of the proposed methods. The error analysis offers valuable insights into their method and helps the reader better understand the evaluation pipeline.
Clarity: The paper is well-structured and easy to follow, with detailed explanations provided.
Significance: This paper tackles the challenging task of evaluating LLM factuality in long-form generation. The datasets are released, and the results are easily reproducible, benefiting the research community.
Weaknesses: 1. The overall concept is similar to previous work that decomposes long-form responses into claims, such as FactScore, which diminishes the novelty. Although this work covers more topics and uses Google instead of Wikipedia in the evaluation pipeline, the similarity remains.
2. The claim that “LLM agents can be better factuality annotators than humans” seems somewhat exaggerated. The human annotators do not have internet access and could only label based on the corresponding Wikipedia page, which is not a practical setup.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. I notice that current benchmarks for long-form factuality, including LongFact and others, typically consist of questions that ask for information or explanations about a subject, such as individuals or concepts (typically extracted from Wikipedia). The dependencies between sentences in the LM’s responses in these benchmarks are generally simple. Can SAFE handle responses where more complex inter-sentence dependencies exist? Consider the following question from ELI5 as an example:
Query Title: What were different decades in the 1800s like?
Explanation: Every decade of the 20th century can be loosely quantified in a general theme (the roaring 20s, the flower power 60s, the weird clothing styles/music of the 80s) and each one is so different from the last. What kinds of themes were common between decades in the 19th century, and were they as distinctively different from each other as those in the 20th century?
Candidate Answer: In the early 1800s there was a radical change of fashion and lifestyle for men. The high breeches, stockings and tricorne hats one associates with the era of the French and American revolutions disappeared with the genesis of modern fashion which emphasised modesty in clothing, eloquence in speech and the most refined manners and hygiene. This was the birth of the "dandy" culture which still thrives and is very much predominant today.
In this candidate answer, the claim “This was the birth of the ‘dandy’ culture” is supported by all the previous claims. Could SAFE handle this?
2. It seems that the idea of constructing LongFact and the quality estimation method SAFE could be used to produce synthetic data for enhancing LM’s factuality. Do the authors plan to explore this direction?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors effectively summarize the limitations of this work. Most of these limitations are not trivial to address, and I agree with the authors that the current design represents a good trade-off between cost and performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and thoughtful reviews. Let us address your comments below.
> The overall concept is similar to previous work that decomposes long-form responses into claims, such as FactScore, which diminishes the novelty.
We agree that our evaluation method SAFE shares similarity with FActScore decomposing a long-form response into facts and checking precision based on external knowledge sources. Our work, however, still makes these contributions:
- We propose LongFact to evaluate long-form factuality in open domains. To our knowledge, this is the first multi-topic dataset for open-domain factuality evaluation.
- SAFE with multi-step reasoning: Figure 3 in Page 4 shows that SAFE carefully reasons through different aspects of the fact to arrive at the final conclusion.
- We propose F1@K as the long-form factuality metric that takes into account both precision and recall. To our knowledge, this is the first long-form factuality metric that can account for recall of model responses.
- We evaluate long-form factuality using the LongFact benchmark and the F1@K metric, allowing us to gain more insights into the impacts of scaling and RLHF on the long-form factuality of models (Appendix E.2 and E.4, Line 1047-1070 and Line 1079-1103).
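To make the F1@K contribution above concrete, here is a minimal sketch of the metric as we understand its definition (assuming $S$ supported facts, $N$ not-supported facts, precision $S/(S+N)$, and recall capped at $K$ facts; this is a hedged illustration, not the paper's code):

```python
def f1_at_k(supported, not_supported, k):
    # F1@K: harmonic mean of factual precision and recall-up-to-K.
    # K encodes the number of supported facts a user cares to receive.
    if supported == 0:
        return 0.0
    precision = supported / (supported + not_supported)
    recall = min(supported / k, 1.0)
    return 2 * precision * recall / (precision + recall)
```

Under this definition, a response with K fully supported facts and no unsupported ones scores 1.0, and extra facts beyond K no longer raise recall.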
> The claim that "LLM agents can be better factuality annotators than humans" seems somewhat exaggerated. The human annotators do not have internet access and could only label based on the corresponding Wikipedia page, which is not a practical setup.
We agree that the capabilities of human annotators in FActScore are limited when only having access to the provided Wikipedia page. In Appendix A.4 Figure 8 (Line 587-599), we show 22 out of 81 incorrect ratings were because of missing information in Wikipedia pages. On one hand, this shows one advantage of SAFE that we want to highlight: using Google Search rather than a predetermined knowledge source. On the other hand, human annotators more often fail for the other reasons shown in the pie chart, mostly reasoning issues or carelessness. Even if we rule out the 22 incorrect ratings caused by missing information, human annotators still give incorrect ratings in 59% of the 100 disagreements, much higher than SAFE’s 24% (Appendix A.3 Figure 7, Line 552-572).
> I notice that current benchmarks for long-form factuality, including LongFact and others, typically consist of questions that ask for information or explanations about a subject, such as individuals or concepts (typically extracted from Wikipedia). The dependencies between sentences in the LM’s responses in these benchmarks are generally simple. Can SAFE handle responses where more complex inter-sentence dependencies exist? Consider the following question from ELI5 as an example:
Thanks for noting this and providing a detailed example. We agree that some responses may be harder to decompose into atomic facts than others. We ran SAFE with gpt-3.5-turbo and Serper API, and saw the sentence "This was the birth of the "dandy" culture which still thrives and is very much predominant today." broken down into three self-contained atomic facts:
- The radical change of fashion and lifestyle for men in the early 1800s was the birth of the "dandy" culture.
- The culture emphasizing modesty in clothing, eloquence in speech, refined manners, and hygiene still thrives.
- The "dandy" culture that originated in the early 1800s is predominant today.
The three sentences adequately contain the main claims of the sentence and are self-contained to include relevant information in the previous claims. These claims are all rated as "Supported" by SAFE.
> It seems that the idea of constructing LongFact and the quality estimation method SAFE could be used to produce synthetic data for enhancing LM’s factuality. Do the authors plan to explore this direction?
Thanks for the proposal. We agree that synthetic data generation for factuality improvement is an exciting direction for research and production. Because we still see a few failure cases of agents like SAFE (Appendix A.3, starting from Line 551), we believe it is worth further improving the agents’ quality so that they can generate higher-quality synthetic data for improving autoraters and models, and thus we leave it to future research.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I will keep my positive score. | Summary: This paper focused on the open-domain long-form factuality problems of Large Language models. It proposes a benchmark called LongFact, which consists of more than 2k prompts across 38 domains, it also proposes an agent-based factuality detection system called SAFE with a new metric to measure the factuality: F1@K. Experiments show that SAFE has satisfactory factual error detection ability which surpasses crowd-source annotators.
Strengths: This paper addresses a critical problem and offers substantial contributions. I believe the released benchmark alongside the SAFE system could be practically useful in real-world applications.
Weaknesses: It is foreseeable that the performance of SAFE could surpass that of crowd-sourced annotators, given that crowd-sourced annotations often lack high quality. I think it would be better to compare SAFE with expert annotators.
Technical Quality: 4
Clarity: 4
Questions for Authors: Refer to the weakness.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Refer to the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and thoughtful reviews. Let us address your comments below.
> It is foreseeable that the performance of SAFE could surpass that of crowd-sourced annotators, given that crowd-sourced annotations often lack high quality. I think it would be better to compare the SAFE with expert annotators.
We agree that crowd-sourced annotations often lack high quality, which is why we used our researcher annotations as a proxy of expert-level human annotations for the 100 disagreements between crowd-sourced annotations and SAFE in Section 4 (starting from Line 123). Appendix A.3 (starting from Line 551, especially Figure 7) shows SAFE’s causes of failure, which essentially shows in which cases and how often SAFE is inferior to expert humans. For our purposes, we believe that SAFE outperforming crowd-sourced humans indicates that SAFE can serve as a strong autorater for factuality. We believe that improving the quality of SAFE past expert-level manual annotation is a challenging yet important area for future exploration. | Summary: This paper introduces a novel benchmark, LongFact, for evaluating the factual accuracy of long-form responses generated by large language models (LLMs). It proposes a method named SAFE (Search-Augmented Factuality Evaluator) to automatically assess the factuality of these responses. SAFE uses an LLM to decompose a long-form response into individual facts and checks each fact's accuracy through Google Search. The authors introduce F1@K, a metric that balances the precision and recall of supported facts to quantify long-form factuality. Empirical results show that SAFE outperforms human annotators in terms of cost and accuracy. The study benchmarks thirteen language models across four model families, finding that larger models generally produce more factual responses.
Strengths: - Innovative Benchmark: The creation of LongFact, a comprehensive benchmark specifically designed for long-form factuality, addresses a significant gap in existing evaluation methods that primarily focus on short-form responses.
- Automated Evaluation Method: The development of SAFE, which leverages LLMs and Google Search for fact-checking, provides a scalable and cost-effective alternative to human annotations, significantly reducing evaluation costs while maintaining high accuracy.
- New Metric: The introduction of F1@K as a metric to balance precision and recall in evaluating long-form factuality is a notable contribution, offering a more nuanced measure of a model's performance.
Weaknesses: - Reliance on Google Search: The dependence on Google Search as the primary source for fact-checking may introduce biases and limitations. The retrieved information may also contain inaccuracies or hallucinations, especially for topics that are not well-covered or are subject to misinformation online.
- Limited Comparison of LLMs: The paper would benefit from a more comprehensive comparison of human agreement scores using different LLMs within the SAFE framework, including open-source models like LLaMA2. This would provide a clearer evaluation of the factuality assessment capabilities of various LLMs.
- Insufficient Ablation Studies: Conducting more ablation studies would strengthen the paper, such as comparing human agreement scores using different search engines or searching across various sources. This would help understand how different fact-checking methods impact the accuracy and reliability of the evaluations.
- Generalization to Other Domains: The paper focuses primarily on open-domain factuality. It remains unclear how well the proposed methods generalize to specialized domains such as medicine or law, where factual accuracy is critical and more nuanced.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you please provide more details of re-annotation on the randomly-sampled subset of 100 individual facts?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and thoughtful reviews. Let us address your comments below. We hope these results will help clarify our work, and we will update our manuscript to include them.
> Reliance on Google Search
Google Search indeed may have limitations as you mentioned (Line 281-290). We chose it because it allows for coverage of a broad range of topics without high cost, and it is arguably the best automated way to estimate the factuality of a claim. Predetermined sources such as Wikipedia may not contain all relevant information needed for open-domain questions (Line 607). SAFE can also be easily tailored to searching over more restricted and predetermined sources by adding "site:xxx" to the query or replacing Serper with another API that queries professional databases.
> Limited Comparison of LLMs
We’ve run the following experiment.
On a random subset of 20 questions within the FActScore biography dataset, we compare Mixtral 8x7B with gpt-3.5-turbo for SAFE. We manually examine the 196 differently-rated claims among all 658. Mixtral wins 1/3 while gpt-3.5-turbo wins 2/3. Both LLMs occasionally fail to find the right result when interacting with search. The main reason Mixtral falls short is its weaker ability to properly revise a claim to be self-contained. For example, for the question "Tell me a bio of Jonathan Tucker", Mixtral revised the claim "Justified aired in 2015" to "The television series that Jonathan Moss Tucker appeared in aired in 2015", thus losing its exact subject. This seems to suggest Mixtral may have weaker instruction-following capabilities, which could be improved by prompt tuning or further fine-tuning.
> Insufficient Ablation Studies
We agree the search engine is a crucial part of SAFE and should be ablated. Because we do not have easy access to other engines or databases, we ablate the search scope between no-scope (current setup with no restrictions) and Wikipedia-only (adding postamble "site:https://en.wikipedia.org" to queries). On a random subset of 20 questions from FActScore biography, we manually examine the 146 differently-rated claims among all 657. We found the following two cases where the difference is attributed to search scope:
1. When no-scope wins: No-scope finds the right information on the open web when Wikipedia doesn’t contain such information.
2. When Wikipedia-only wins: The no-scope query returns irrelevant information on the open web and causes the rater to rate the claim as "Not Supported", while the Wikipedia-only query finds the right information.
Case 1 appears three times more often than Case 2, suggesting that searching over the open web is more helpful in providing the LLM with the right information. The advantage should be higher for open-domain questions for which Wikipedia contains even less information. Meanwhile, Case 2 indicates that the interaction between LLM and search engines should be further improved in future research to more accurately find the right information when it exists.
> Generalization to Other Domains
Our benchmarking results on a set of 250 randomly-sampled LongFact-Object prompts already contain law and medicine (Appendix E.6.18, E.6.19, E.6.25). Nevertheless, we conduct an additional study on 6 law and medicine questions:
- What does the EB-5 Immigrant Investor Program under US Immigration Law entail?
- Can you provide some information about the UN Convention on the Law of the Sea?
- What is the purpose of the I-765 Application for Employment Authorization in US Immigration Law?
- What is the Orphan Drug Act?
- What is the role of the protein Fibroblast Growth Factor 21 in regulating metabolism?
- What are the primary functions of the neuromodulatory system in the human brainstem?
We prompt gpt-3.5-turbo for responses, and use SAFE with gpt-3.5-turbo and Serper to rate claims. We then manually examine the correctness of each rating.
Overall, SAFE gives the same rating as us (researcher + full internet access) on 79.7% (106/133) of law claims and 94.9% (130/137) of medicine claims. We find that in more nuanced domains, it is more challenging for SAFE to (1) break down a long sentence into claims and (2) revise a claim to be self-contained than to rate whether a claim is supported by search results: over 90% of the incorrect ratings fall into the former two categories.
We find SAFE is still able to reason from search to judge whether a claim is supported. A success example is that SAFE rates "FGF21 is produced in response to metabolic stresses" as "Supported" after reasoning from search result "Increased FGF21 expression in Non-Alcoholic Fatty Liver Disease might be a response to metabolic stress...". There are also failures like:
- Not breaking down the sentence into a proper claim: One of the claims the LLM extracts from the sentence "For example, if a refugee named Maria was granted asylum in the U.S. on January 1, 2021…" is "Maria is a refugee".
- Revised claim not self-contained: One of the claims the LLM extracts from the sentence "FGF21 levels are elevated in conditions such as obesity, type 2 diabetes, and fatty liver disease" is "FGF21 levels are elevated".
Therefore, for more nuanced domains, SAFE can be improved by using LLMs with stronger reasoning capabilities to better generate self-contained claims, not necessarily to better interact with tools like search engines.
> Could you please provide more details of re-annotation on the randomly-sampled subset of 100 individual facts?
Thanks for allowing us to clarify our annotation process. We used our own "researcher + full internet access" annotations to generate ground-truth labels (Appendix A.10) and inspected the annotations of both FActScore and SAFE as well as the steps within SAFE (Appendix A.3, A.4). The researcher annotations (ground-truth labels) were derived from manually searching the full Internet, including but not limited to Wikipedia. Please let us know if we can clarify any other details about this. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This work proposes a 4-step pipeline for automatic evaluation of long-form answers using LLMs. Given a long-form answer, their pipeline involves (1) splitting the answer into individual facts, (2) decontextualizing each fact, (3) determining each fact's relevance to the original question, and (4) verifying each fact against retrieval results from Google Search.
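The 4-step pipeline described in this summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper callables (the splitter, decontextualizer, and the relevance and verification judges) are hypothetical stand-ins for the LLM and search calls the pipeline would actually make.

```python
# Hedged sketch of the 4-step long-form factuality evaluation pipeline.
# All four helper callables are assumed/hypothetical stand-ins for LLM or
# Google Search invocations; they are injected so the skeleton stays testable.
def evaluate_answer(question, answer, split, decontextualize, is_relevant, is_supported):
    supported = not_supported = irrelevant = 0
    for fact in split(answer):                # (1) split answer into individual facts
        fact = decontextualize(fact, answer)  # (2) make each fact self-contained
        if not is_relevant(fact, question):   # (3) check relevance to the question
            irrelevant += 1
        elif is_supported(fact):              # (4) verify against retrieval results
            supported += 1
        else:
            not_supported += 1
    return supported, not_supported, irrelevant
```

In a real deployment each callable would wrap an LLM prompt or a search query; here they are left abstract so the control flow of the pipeline is the only claim being made.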
In addition to proposing this pipeline, the authors also develop a set of 2,280 total information-seeking prompts (automatically generated with GPT-4) for evaluation. The authors then demonstrate the efficacy of their evaluation pipeline by comparing their pipeline's judgments against human annotators and find high (~75%) agreement. Furthermore, the authors find that, in these disagreements, their pipeline's judgments were usually correct and the annotator judgments were flawed (also ~75% of the time).
Finally, using their pipeline, the authors propose a metric for evaluating the quality of a long-form response that is based on F1 over individual facts in the generated answer. Instead of relying on a gold set of individual facts for each question, their metric evaluates recall using an "ideal number of individual facts in the answer for a given question", which is set as a hyperparameter of the evaluation metric.
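An F1@K-style metric of this kind can be sketched as below. The exact formula here (precision over all extracted facts, recall as the number of supported facts capped at the ideal count K) is our reading of the summary, offered as an assumption rather than the paper's verbatim definition.

```python
# Hedged sketch of an F1@K-style metric: precision over all extracted facts,
# recall capped at an assumed "ideal" number of supported facts K.
def f1_at_k(num_supported: int, num_facts: int, k: int) -> float:
    """num_supported: facts verified as supported; num_facts: all extracted facts."""
    if num_supported == 0 or num_facts == 0:
        return 0.0
    precision = num_supported / num_facts
    recall = min(num_supported / k, 1.0)  # recall saturates once K facts are supported
    return 2 * precision * recall / (precision + recall)
```

Note how this encodes the rebuttal's point about gamification: maximum recall requires at least K supported facts, but a full score still requires every extracted fact to be supported (precision 1).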
Strengths: This work addresses an important open problem in LLM evaluation: automatic evaluation of long-form answers. The proposed evaluation pipeline is accurate (as demonstrated through human evaluation) and cheap (20x cheaper than relying on humans). The proposed dataset, too, will be a great resource for future work.
While the proposed F1 evaluation metric may not comprehensively evaluate all aspects of individual fact recall in long-form QA, it still represents a suitable and scalable alternative. See Weaknesses below for discussion.
Weaknesses: One concern is with the "ideal number of individual facts in the answer for a given question" hyperparameter. While this does seem to be a suitable alternative to relying on a gold set of facts, such a value is highly question- and user-dependent. While the results demonstrate that performance rankings are robust to this hyperparameter selection, there is some concern about whether the metric treats models that have been trained to produce more concise answers fairly relative to those producing more verbose answers.
Intuitively, it seems that fact relevance (not just the binary prediction, but some more granular score) is also an important consideration when evaluating recall. Recall over a wide range of correct, somewhat relevant facts may be less desirable than recall over a small number of highly relevant facts.
Technical Quality: 3
Clarity: 4
Questions for Authors: Regarding the weaknesses noted above, I'm curious whether, if this evaluation / metric / dataset is released as a benchmark or leaderboard, it could be gamified by systems that are trained or prompted to produce a specific number of relevant facts (matching K in the evaluation metrics). It seems like something along the lines of the precision vs. # of facts recalled curve in the appendix is a more suitable way to evaluate models, testing their ability to generate responses with high F1 over different long-form answer lengths.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and thoughtful reviews. Let us address your comments below.
> One concern is with the "ideal number of individual facts in the answer for a given question" hyperparameter...
We agree that the desired number of supported facts K is user-dependent, and the optimal value may vary across models and questions. Our recall metric favors longer responses because of our focus on long-form factuality:
- Modern LLMs are often trained to produce more-verbose responses because of the reward signals that come from automated or human evaluations [1, 2, 3, 4]. This may be because humans often favor longer responses because these responses provide richer information, especially for open-ended questions (such as the ones in LongFact).
- The selection of K may look arbitrary, but as you mentioned, the performance rankings are mostly robust to the value of K, and the tunability of K provides researchers the flexibility of deciding how many facts they want the response to include.
References:
[1] Yann Dubois, Balázs Galambosi, Percy Liang, Tatsunori B. Hashimoto. Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators. 2024.
[2] Ryan Park, Rafael Rafailov, Stefano Ermon, Chelsea Finn. Disentangling Length from Quality in Direct Preference Optimization. 2024.
[3] Weizhe Yuan, Ilia Kulikov, Ping Yu, Kyunghyun Cho, Sainbayar Sukhbaatar, Jason Weston, Jing Xu. Following Length Constraints in Instructions. 2024.
[4] Tianhao Wu, Weizhe Yuan, Olga Golovneva, Jing Xu, Yuandong Tian, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar. Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge. 2024.
> Intuitively, it seems that fact relevance (not just the binary prediction, but some more granular score) is also an important consideration when evaluating recall...
Thanks for the insight; we agree that rather than evaluating recall based on the number of (somewhat) relevant facts within an LLM’s response over an expected number of facts K, it would be ideal to evaluate recall based on the relevance between facts in an LLM’s response and a golden ground-truth set of facts that covers the most important aspects. That said, we consider our recall metric valid because:
- The relevance of facts in an LLM’s response can be seen as a function of instruction-following capabilities [5], so we follow previous works [6, 7] and purely focus on the factuality of claims that are at least somewhat relevant.
- Why there isn't a core set of highly-relevant facts as ground truth: We presented the LongFact benchmark without a core set of highly-relevant facts because this set would be highly subjective and therefore not straightforward to create. It would be difficult to objectively decide which facts are more important than others for open-domain questions. For this reason, we instead presented "recall up to K facts" as a surrogate metric for recall. We agree, however, that designating such core sets of facts for LongFact prompts could be a valuable direction for future work.
References:
[5] Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, Danqi Chen. Evaluating Large Language Models at Evaluating Instruction Following. ICLR 2024.
[6] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi. FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. EMNLP 2023.
[7] Katherine Tian, Eric Mitchell, Huaxiu Yao, Christopher D. Manning, Chelsea Finn. Fine-tuning Language Models for Factuality. ICLR 2024.
> Regarding the weaknesses noted above, I'm curious whether, if this evaluation / metric / dataset is released as a benchmark or leaderboard, it could be gamified by systems that are trained or prompted to produce a specific number of relevant facts (matching K in the evaluation metrics)?
Thanks for noting this. Indeed, in Lines 291-297 we discussed how our metric and dataset could be gamified by simply outputting repeated factual claims. Our assumption is that LLM outputs don’t contain repeated facts, which can be satisfied by (1) prompting reasonably-trained LLMs whose outputs are rarely repetitive, and/or (2) adding a unified deduplication step to remove duplicate facts. We thus view this as an engineering problem that can be solved by future research. Additionally, even if one provides K relevant facts to achieve maximum recall, these facts must also be factual to achieve maximum precision that would aggregate into a full score of 100% for F1@K.
> ...the precision vs # of facts recalled curve in the appendix seems to be a more suitable way...
The focuses of our F1@K metric and the precision vs # facts curves in Figure 14 are slightly different. The F1@K metric essentially asks "is the model able to provide K relevant and factual claims about the given topic." On the other hand, the curves ask "if the model had to give K facts, how many of them would be factual." The curves may provide richer information on the long-form factuality of models, but they are harder to measure, because:
- It is difficult to control the number of facts in the response. As explained in Appendix D.4 - Footnote 25, the model may have different understandings on what constitutes an individual fact, so we only ask for a given number of sentences. Also, as explained in Footnote 26, the model needs to have a strong instruction-following capability to be able to precisely respond at a given length. Even so, the length of the response is only a proxy for the number of facts that it should contain.
- Obtaining the data to create these curves requires significantly more time and resources since it requires sampling and evaluating multiple responses per question. It is unclear whether this additional computational cost would be desirable.
Because of these reasons, we proposed F1@K in order to use a single number as a condensed metric for long-form factuality. | null | null | null | null | null | null |
Strategic Littlestone Dimension: Improved Bounds on Online Strategic Classification | Accept (poster) | Summary: This paper studies online strategic classification from a combinatorial perspective. The paper defines a new combinatorial dimension called the Strategic Littlestone dimension that jointly captures the complexity of the hypothesis class and the manipulation graph. They show that the Strategic Littlestone dimension exactly quantifies the minimax number of mistakes made by deterministic learners in the realizable setting. The paper also provides improved upper bounds in the agnostic setting by modifying the classic agnostic-to-realizable reduction to handle the fact that the learner does not observe true features. Finally, the paper considers the case where the manipulation graph is not known to the learner but belongs to a family of graphs which is known to the learner. They provide bounds on the minimax value in both the realizable and agnostic settings in this case.
Strengths: - The paper is well-written and easy to follow
- The problem setting is well-motivated
- The technical contributions are novel and improve upon existing results. In particular, I found the proof of the lowerbound in terms of the strategic Littlestone dimension to be nice.
Weaknesses: - Lack of lower bounds. Apart from the known manipulation graph, realizable setting (Theorem 3.2), lower bounds on the minimax regret in terms of the strategic Littlestone dimension are not provided.
- Lack of results for randomized learners in the realizable setting. While the authors do discuss randomized learnability, and its difficulty in the discussion section, I find it a bit unsatisfying. Without lower bounds for randomized learners, it is not clear whether the strategic Littlestone dimension characterizes realizable strategic online learnability in full generality. Meaning, as far as I can tell, there could be a separation between deterministic and randomized realizable learnability for strategic online classification (correct me if I'm wrong here).
- Deterministic learners in the agnostic setting. It is well known that in order to achieve sub linear regret in the traditional online classification setting, one needs randomization in the agnostic setting. Thus, it is a bit strange to me that the authors study/construct deterministic learners for strategic online classification in the agnostic setting. This also leaves open the question of whether the strategic Littlestone dimension characterizes agnostic online learnability.
- The two points above bring into question the utility/usefulness of the strategic Littlestone dimension. Compared to, say, the Littlestone dimension, which not only qualitatively characterizes online learnability in the realizable and agnostic settings but also exactly quantifies the minimax rates, the same (at least as of right now) cannot be said of the strategic Littlestone dimension. As far as I can tell, the strategic Littlestone dimension only characterizes deterministic realizable learnability.
- Lack of intuition behind the dimension. In traditional online classification, the intuition behind needing shattered Littlestone trees is very easy from a lower bound perspective - we just need to tell the adversary what to play for every move of the learner. However, this sort of logic does not seem to work for strategic Littlestone trees due to the need for reverse engineering. Unfortunately, I did not fully understand the intuition behind the strategic Littlestone tree from the proof sketch of the lower bound in the main text. I had to read the full proof in the Appendix to fully understand the structure of the strategic Littlestone tree. I think the paper can benefit from a more detailed discussion about the differences between shattered Littlestone trees and shattered strategic Littlestone trees and how the adversary should use strategic Littlestone trees to construct hard streams. In addition, I think to improve intuition, it would be nice to buff up the proof sketch of Theorem 3.2, and even better to include the full proof in the main text.
- Minor: there is a typo in the second bullet point in Def. 3.1. I think it should be $N_G^{-}$.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the case of unknown manipulations graphs, is the finiteness of the graph class necessary? Is there a joint dimension of the graph class and the hypothesis class that captures learnability and the minimax rates in this setting?
- Your definition of realizability states that if $(x_1, y_1), ..., (x_T, y_T)$ is the sequence of agents chosen by the adversary, there is a hypothesis $h \in H$ such that $h(br_{G, h}(x_t)) = y_t$. That is, the label $y_t$ agrees with the value of $h$ on the agents best-response to $x_t$ w.r.t $h$. Is this the most natural definition of realizability? What about simply requiring that $h(x_t) = y_t$ for all $t \in [T]$. Then, perhaps the fact that the learner observes $v_t$ instead of $x_t$ but still needs to correctly predict $y_t$ can be viewed as the learner observing noisy features? It would be nice if you can comment about why you choose this particular notion of regret and realizability.
- Can you comment about any known separations between deterministic and randomized learnability for strategic online classification in the realizable setting?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed and insightful comments.
> Deterministic learners in the agnostic setting
Our focus on deterministic algorithms is motivated by real-world applications of strategic classification where legal or regulatory requirements often demand the learner to use deterministic algorithms. For example, in college admissions, institutions are required or expected to publish clear and transparent decision criteria, as randomization could be seen as arbitrary or unfair, weakening the trustworthiness of the admissions process.
At the technical level, our construction of representative experts (Lemma 4.3) relies on a careful coupling analysis that applies to any trajectory of classifiers chosen by the learner, regardless of whether randomness is used or not. However, for the strategic variant of the learning from expert advice algorithm, the only known agnostic algorithm (from [Ahmadi et al., EC’23]) is also deterministic, which accounts for the deterministic nature of our agnostic algorithm. Therefore, to extend this technique to randomized algorithms, one key challenge is to design randomized agnostic algorithms in the finite expert setting with at most logarithmic dependence on the size of the experts. We believe this is an interesting and non-trivial question for future research.
> Finiteness of the graph class in the unknown manipulation graph setting, and joint dimension of the graph class and the hypothesis class
Thank you for the insightful question. While the finiteness of the graph class is necessary in the worst case scenario (see [Prop 14, Cohen et al, 2024] for a lower bound in terms of $\log|\mathcal{G}|$), it is not necessary for every instance. We agree that introducing a joint complexity measure for the graph class and the hypothesis class to characterize minmax rates is a valuable open question. Some technical challenges we encountered in extending our single-graph SLDim to the multi-graph setting include:
- In the structure of the strategic Littlestone tree (see Lines 247-248), the set of outgoing edges from a node depends on its out-neighborhood under the manipulation graph. When dealing with a family of graphs, one might attempt to use the union of the out-neighborhood for each graph in the class. However, that causes the tree to be too wide and shallow, resulting in a dimension that is smaller than the actual minmax mistake bound. In particular, it cannot become a valid upper bound because the branches no longer reflect the true types of mistakes that the learner can possibly make.
- If one instead keeps track of a “graph version space” to determine the structure of the Littlestone tree, there may be a monotonicity issue. As graphs are eliminated from the graph version space, the set of feasible manipulations shrinks, which may potentially cause the tree to become thinner and deeper, thus increasing the dimension of the subgraph! This is in contrast to the monotonicity in terms of the hypothesis class, which is less subtle and almost immediate from the definition.
> Definition of realizability and regret
Thanks for the question. We believe this is an important point and we will add a remark to future versions of our paper. Unlike the traditional notion of realizability, which simply assumes $h(x_t)=y_t$ for some $h\in H$ across all $t\in[T]$, we use the strategic notion of realizability that requires $h(BR_{G,h}(x_t))=y_t$. In other words, we require that a hypothesis $h\in H$ classifies each agent correctly when the agents best respond to $h$. We adopt this notion for several reasons:
- This notion is rooted in the concepts of Stackelberg value and Stackelberg regret in game theory, which account for agents’ best responses and serve as a natural benchmark in strategic settings.
- It places the learner and the optimal hypothesis on equal footing when evaluating the number of mistakes they make, unlike the standard notion of realizability which implicitly assumes that agents manipulate against the learner but not against the optimal hypothesis.
- Under this strategic notion of realizability, there always exists a learning algorithm in hindsight that can achieve zero mistakes against a realizable sequence, such as when the algorithm implements the optimal hypothesis throughout. This aligns with the purpose of introducing realizability in the first place. In contrast, it is unclear that any algorithm would be able to achieve zero mistakes against a standard realizable sequence. Previous work has shown strong incompatibility between these two notions in the context of linear strategic classification setting [Chen, Liu & Podimata, NeurIPS’20].
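The strategic notion of realizability discussed above can be illustrated with a minimal sketch, assuming the common model in which agents prefer a positive label and may move along one outgoing edge of the manipulation graph to obtain it. The function names and the dictionary encoding of the graph are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch of best response on a manipulation graph and the strategic
# realizability condition h(BR_{G,h}(x_t)) == y_t. `graph` maps each feature
# point to its out-neighbors; `h` is a 0/1 classifier.
def best_response(x, h, graph):
    if h(x) == 1:
        return x                    # already classified positively: no need to move
    for v in graph.get(x, []):      # look for a reachable positively-labeled point
        if h(v) == 1:
            return v
    return x                        # no beneficial manipulation exists

def strategically_realizable(sequence, h, graph):
    # h must label every agent correctly *after* the agent best-responds to h
    # itself, placing learner and benchmark hypothesis on equal footing.
    return all(h(best_response(x, h, graph)) == y for x, y in sequence)
```

Under this sketch, implementing the optimal hypothesis throughout trivially achieves zero mistakes on a strategically realizable sequence, which is the third point above.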
> Separations between deterministic and randomized learnability
Yes, we showed a separation between deterministic and randomized mistake bounds in Appendix E. Specifically, we constructed a family of instances where for each value of $\Delta$, there exists an instance where the minmax bound for deterministic algorithms is at least $\Delta-1$, but the minmax bound for randomized algorithms is at most $\log\Delta$. This class witnesses a super-constant gap between deterministic and randomized bounds, unlike their non-strategic counterparts which only differ by a factor of 2. This implies that the proposed SLDim does not characterize randomized learnability and that characterizing learnability in the randomized setting is highly nontrivial.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. | Summary: This paper studies an adversarial online setting where the agents can manipulate their feature vector $x_t$ to some feature vector $x'_t$ given a graph of manipulation rules $G$. The learner observes only $x'_t$ and knows in advance $G$.
The usual goal is to obtain sublinear regret, with respect to some hypothesis class. In the realizable setting, this is equivalent to minimizing the number of mistakes, also known as the "mistake bound model".
When $G$ is "empty", meaning there are no edges, we get the standard online model.
The main contributions are as follows:
In the realizable setting, the authors define a Littlestone tree that incorporates the manipulations, along with a corresponding notion of Littlestone dimension for these trees. Extending to the agnostic setting, the authors use a known technique for constructing a finite "online cover" of the hypothesis class, and then apply an algorithm for the finite case (a version of multiplicative weights update).
Moreover, an extended model is introduced, where there is a finite set of manipulation graphs from which a graph is chosen. When all agents use the same graph from this set, this is called the realizable case; otherwise, it is the agnostic setting.
Strengths: The results in this paper improve upon previous papers that study this question.
The techniques are clear, and the proofs are correct (as far as I checked).
Weaknesses: My main concern about the paper is the lack of technical novelty, ideas from standard online learning translate pretty smoothly to the strategic online learning setting.
The definition of the Littlestone tree is straightforward, taking the "strategic loss" into account. Indeed, the upper and lower bound proofs are very similar to those of the standard Littlestone dimension and the SOA algorithm.
The ideas in the agnostic setting are also quite standard: the agnostic online learning technique of constructing a finite "online cover" for the class (by Ben-David et al.) and using a version of multiplicative weights on the finite class (by Ahmadi et al.).
What is the technical contribution of this paper?
The writing of the paper could be improved. I understand why the proofs should go through, but some definitions are really confusing.
See the next section.
Technical Quality: 3
Clarity: 2
Questions for Authors: There are many parts of the paper where the writing/definitions are confusing.
Paragraph on the manipulation rule: does $h$ denote the classifier or the loss function? Why does it make sense to define the best response on points where $h$ returns $1$? I don't see how it compiles if it's not the loss function.
It's not clear if it is consistent with lines 230-235: how do you define false positives and false negatives, with respect to $h$ or the loss of $h$?
Line 71: what does "under the respective particular parametrization" mean?
I understand what the line afterward means, but line 71 is not clear to me.
Lines 94-95: "First, to construct the set of representative challenges". Is it a typo?
Minor:
Line 34: "decision maker has little or no prior knowledge about the population being classified"; this is also the case where the learner has no prior knowledge of the distribution.
Sometimes the learner is also referred to as the decision maker (mostly in the intro), it's worth making it clear.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are addressed properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and valuable feedback.
> Technical contribution
We respectfully disagree that our paper lacks technical novelty because it applies ideas from standard online learning. While we build on established methodologies such as the Littlestone tree, SOA learning rules, and the agnostic-to-realizable reduction via experts, these are foundational philosophies in the learning theory literature that require different technical insights when applied to different settings. An active and expanding line of work has also been applying these methodologies to understand learnability and optimal rates in various contexts, such as the multiclass Littlestone dimension [Daniely et al., COLT’11], the Natarajan-Littlestone dimension [Kalavasis et al., NeurIPS’22], the randomized Littlestone dimension [Filmus et al., COLT’23], and the VC-Littlestone tree [Bousquet et al., STOC’21], to name a few. While these results share common philosophies, each presents unique technical challenges and requires new insights.
In the strategic classification context, one significant challenge for us is the information asymmetry between the learner and adversary regarding the true features before manipulation. This challenge is not addressed by previous works on non-strategic learning where the true features are always observable. Below we outline our main technical contributions in addressing these challenges:
- **Structural insights.** As discussed in Lines 225-242 of Section 3, our construction of the strategic Littlestone tree accounts for the asymmetric information carried by false positives and false negatives by creating an asymmetric number of branches. We also introduce a carefully designed consistency rule to ensure realizability of the adversary’s choices.
- **Lower bounds:** While constructing a sequence of post-manipulation observations that the adversary wishes to induce to the learner is straightforward, a critical challenge is in finding a realizable sequence of initial features before manipulation. We address this by combining our new consistency rule with a novel reverse-engineering technique.
- **Upper bounds:** As discussed in Lines 285-290, the main challenge in proving the upper bound is to construct a classification rule that simultaneously satisfies the “progress on mistakes” property for all features in the space. Since each layer of the SL tree inspects the entire neighborhood of a certain node, one cannot optimize each node independently as in non-strategic settings. We resolve this by favoring positive labels whenever false positives decrease the SLDim, a technique also rooted in the asymmetric information carried by false positives vs negatives.
- **Agnostic reduction:** As discussed in Lines 88-93, our expert set needs to effectively guess the direction from which each agent moves. We also extend this technique to the unknown graph setting.
We will add a more detailed discussion about these technical challenges and contributions in the revisions of our paper.
> Manipulation rule, false positives and false negatives
Implicit in our model is that all agents prefer positive labels over negative labels. While we use $h$ to denote the learner’s choice of classifier, you’re correct that the agents’ loss function can also be written in terms of $h$. Specifically, under classifier $h$, an agent with true features $x$ and post-manipulation features $v$ incurs a loss of $-h(v)-\infty\cdot\mathbf{1}\{(x,v)\not\in E\}$, where the second term is to ensure feasibility of manipulation from $x$ to $v$. To minimize this loss, agents will choose $v$ such that $h(v)=+1$ if possible. That’s why we define the best response on nodes where $h$ returns $+1$. We will clarify this assumption explicitly in future versions.
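The best-response rule described above can be sketched in a few lines (a minimal illustration assuming the manipulation graph is given as an adjacency map; `best_response` and `neighbors` are our own illustrative names, not notation from the paper):

```python
def best_response(x, h, neighbors):
    """Agent with true features x best-responds to classifier h.

    All agents prefer the positive label: if h already classifies x
    positively, the agent stays; otherwise it moves to any feasible
    neighbor v with h(v) = +1; if no such neighbor exists, it stays at x.
    """
    if h(x) == +1:
        return x
    for v in neighbors(x):  # feasible manipulations, i.e., (x, v) in E
        if h(v) == +1:
            return v
    return x
```

For instance, with graph `{'a': ['b']}` and an `h` that is positive only on `'b'`, an agent at `'a'` would move to `'b'`, while a node with no positively classified neighbor stays put.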
If an agent $(x,y)$’s true label $y$ is negative but she manipulates her features to receive a positive classification $h(v)=+1$, the learner has made a “false-positive mistake”. In this case, the observation available to the learner is represented as the pair (agent’s observable features, agent’s true label)$=(v,-1)$. Conversely, if $y=+1$ but no neighbor of $x$ is classified as positive by $h$, the agent will not manipulate and will receive a negative label $h(x)=-1$, resulting in a “false-negative mistake”. The learner’s observations are denoted by the pair (agent’s observable features, agent’s true label)$=(x,+1)$.
> Line 71, "under the respective particular parametrization"
Thank you for pointing this out; we are sorry for the confusion. What we meant is that the (near-)optimality of the mistake bounds in previous work [Cohen et al., Ahmadi et al.] is established under the assumption that the mistake bound must be parametrized as a function of the Littlestone dimension and/or the graph’s out-degree. These works show optimality on a specific instance in which the bounds cannot be improved when parameterized in these particular ways. However, these bounds are not tight for all instances, especially when parameters such as LDim or the out-degree are infinite. We will clarify this in more detail.
> Typo in Lines 94-95
Yes, this is a typo. We meant to write “to construct the set of representative experts”.
> Learner’s prior knowledge
While it is true that in the stochastic/offline setting with an unknown distribution, the learner has no prior knowledge about the population being classified, we want to emphasize that this is still easier than our online setting: even though the distribution is unknown, the learner knows that there exists an underlying distribution, allowing her to learn a good classifier by estimating the distribution. In contrast, in the online setting, agents are adversarially chosen on-the-fly and do not come from any prespecified distribution.
> Learner vs decision maker
Yes, we will make it clear in future versions of our paper.
---
Rebuttal Comment 1.1:
Title: Raising my score
Comment: Thanks for your response.
I believe the contribution is sufficient for acceptance.
Please make the relevant changes in the presentation so it will be easier to understand the main definitions.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score! We will incorporate your suggestions into revisions of the paper to make the presentation more clear. | Summary: The authors continue the study of strategic classification in the online setting, where an agent can manipulate the instance features to potentially force a positive prediction (governed by a manipulation graph). In the *realizable* case, the authors provide a strategic variant of the Littlestone dimension and the SOA algorithm yielding the exact instance-dependent mistake rate. This improves upon degree-dependent mistake bounds of previous work. They also consider two individual additional directions: the agnostic case and the case when the manipulation graph is unknown. In the first they improve upon existing regret bounds, while in the latter they state novel upper bounds almost matching existing lower bounds.
Strengths: Well written, interesting setting, and a natural continuation of previous work.
Tight characterization of the realizable setting.
--- rebuttal ---
updated from 6 to 7
Weaknesses: Strength of the agnostic result is a bit unclear / open (see questions / limitations below).
Technical Quality: 4
Clarity: 4
Questions for Authors: How does Thm 4.1. (the agnostic regret bound) relate to Prop. 30 in Cohen et al. [2024]? Moreover, are there somewhat tight (say up to log factors) lower bounds on the agnostic regret bound? In particular is the $\mathcal{O}(\Delta\cdot\mathrm{SLdim})$ term necessary in the upper bound (Thm 4.1)? (From your remark I understood that only $\Delta\cdot\mathrm{OPT}$ and $\mathrm{SLdim}$ are known lower bounds).
Is the assumption that $X$ is discrete a strong one/necessary? We can just model the "manipulation graph" on an arbitrary domain $X$ as a function $f:X\to P(X)$, where $f(x)$ is the set of allowed manipulations of $x$. Are there any difficulties generalizing here?
Is a "symmetric" setting possible where the agent can (adversarially) modify $x$ to $v\in N[x]$ no matter the label $h(x)$? The motivation could be that some agents want to be strategically classified as $-1$ while others as $1$.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The achieved agnostic regret bound is not compared to the agnostic one in Cohen et al. [2024, Prop 30], or perhaps I missed it, see question above. Please clarify as this would help to judge the novelty and strength of the result.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and the insightful comments.
> Comparison of our agnostic regret bound (Thm 4.1) to that of Cohen et al. (Prop 30), and whether there are tight lower bounds
Thank you for your question. Our agnostic algorithm that achieves the mistake bound in Thm 4.1 only requires observing post-manipulation features after committing to a classifier at each round. In contrast, Prop 30 of Cohen et al. requires the learner to observe pre-manipulation data before choosing a classifier and then observe post-manipulation data afterwards. This makes their setting strictly easier than ours. We will add a remark about this comparison to revised versions of our paper.
About lower bounds in the agnostic setting, we agree that both $\Delta\cdot\text{OPT}$ and $\text{SLDim}$ are valid lower bounds but it’s unclear whether the $\Delta\cdot\text{SLDim}$ term is necessary. It remains an important open question to derive lower bounds for this agnostic setting.
> Does the domain $\mathcal{X}$ need to be discrete?
No, the assumption on the discrete domain $\mathcal{X}$ is not necessary. We totally agree that our results can be immediately extended to any domain with arbitrary predefined manipulations characterized by some abstract function $f$. We will clarify this in revisions.
> Symmetric setting where agents can adversarially modify their features to the neighboring features
We agree that your proposed setting is also well-motivated, though it takes a different approach compared to our work. Our focus is on strategic modifications, which can be viewed as a special case of the fully adversarial setting that you mentioned. The setting you described, where each $x$ can adversarially move to any neighbor in $N(x)$ regardless of the label, has also been explored in the adversarial robustness literature, particularly in the context of offline learning where data points are sampled from a fixed distribution (see, e.g., [Montasser et al., NeurIPS’22]). Another related model (also in the offline setting) that reflects your motivation is proposed by [Sundaram et al., ICML’21], which introduces another parameter $r\in\mathbb{R}$ that indicates how much a data point prefers label $+1$ over $-1$. In this model, a negative $r$ implies that some agent wants to be classified as $-1$ rather than $+1$. Although our current approach does not generalize immediately, we believe that it is an interesting direction to bring either model to the online learning setting.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification, I raised my score. This is a timely paper studying an important problem.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score! | Summary: The paper tackles online binary classification where agents manipulate observable features for positive outcomes. It introduces the Strategic Littlestone Dimension (SLD), a new measure capturing the complexity of the hypothesis class and manipulation graph, demonstrating its role in achieving optimal mistake bounds for deterministic algorithms in the realizable setting. It also improves regret bounds in the agnostic setting by refining agnostic-to-realizable reductions and addressing unobserved original features. Additionally, it relaxes the assumption of a known manipulation graph, deriving regret bounds for scenarios with imperfect graph knowledge in both realizable and agnostic settings.
Strengths: The problem is well-motivated and interesting. The paper is also well-structured. The result based on the notion of the Strategic Littlestone Dimension is tight, with matched lower and upper bounds. Although I did not check the proof, the results of this paper appear to be correct.
Weaknesses: My main concern is twofold:
1. The paper primarily concerns deterministic learning algorithms, while most robust algorithms dealing with adversaries are stochastic, which greatly limits the practical relevance of the paper.
2. The computational complexity of Algorithm 1 is not clear. Line 3 of Algorithm 1 looks computationally expensive. Can authors comment on the computational complexity of the algorithm?
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors did discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions and comments.
> Weakness 1: deterministic learning algorithms
While we agree that many robust algorithms rely on randomization to deal with adversaries, we want to highlight that in the context of strategic classification, there are important scenarios where the learner must use deterministic algorithms due to legal or regulatory requirements. An example is college admissions, where institutions are required to publish clear and transparent decision criteria (aka classifiers), as randomization could be perceived as arbitrary or unfair, weakening the trustworthiness of the admissions process. This motivates our study of deterministic algorithms and highlights its practical relevance. We will add this note to the revised paper.
In addition, characterizing the minmax rates for randomized algorithms is highly nontrivial. In Appendix E, we have constructed instances for all $\Delta$ where the deterministic minmax bound is $\Omega(\Delta)$ but the randomized minmax bound is $O(\log\Delta)$. This reveals a super constant gap that did not exist in the non-strategic setting.
On the technical front, one of the many challenges of establishing a lower bound for randomized algorithms is that the adversary's ability to reverse-engineer the post-manipulation features to obtain pre-manipulation ones relies on “looking ahead” at the learner’s algorithm. However, if the learner uses randomness, the adversary cannot fully control which mistakes will occur or what information they provide to the learner. This challenge is deeply rooted in the information asymmetry inherent in the strategic setting.
> Weakness 2: computational complexity of Algorithm 1
While we acknowledge that Algorithm 1 can be computationally expensive to implement, we want to remark that our primary focus is on the *statistical complexity* of learning in the presence of strategic manipulations, rather than the computational complexity. It is often the case that statistically optimal algorithms become computationally intensive — this is also true in the traditional (non-strategic) setting, where the SOA algorithm that enjoys minimax optimal mistake bound is also computationally expensive, and there are even computational hardness results for learning certain classes [Hazan and Koren, STOC’16]. We believe that exploring the tradeoff between computational and statistical complexity is an interesting direction for future research.
---
Rebuttal Comment 1.1:
Comment: Many thanks for your response. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper considers the problem of online binary classification when each data point can strategically manipulate its features in a discrete way which is captured by a manipulation graph.
Protocol and regret: In each round, the learner picks a deterministic classifier $h_t$, and then the data point $(x_t,y_t)$ arrives, and manipulates its feature according to graph $G$ from $x_t$ to $v_t$. The learner receives only $v_t$ and incurs loss $\mathbb{1}[h_t(v_t) \neq y_t]$. The regret is measured with respect to the best fixed $h \in \mathcal{H}$.
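For concreteness, the round-by-round interaction just described can be simulated as below (an illustrative sketch, not the paper's algorithm; `choose` and `update` are hypothetical hooks standing in for an arbitrary online learner):

```python
def run_protocol(choose, update, stream, neighbors):
    """Simulate the online strategic-classification protocol.

    Each round: the learner commits to a deterministic classifier h_t,
    the agent (x_t, y_t) manipulates x_t to v_t seeking a positive label,
    and the learner observes only the post-manipulation features v_t.
    Returns the total number of mistakes 1[h_t(v_t) != y_t].
    """
    mistakes = 0
    for x, y in stream:
        h = choose()
        # agent best-responds: stay if already classified positive,
        # else move to a positively classified neighbor if one exists
        v = x if h(x) == +1 else next((u for u in neighbors(x) if h(u) == +1), x)
        mistakes += int(h(v) != y)
        update(v, y)  # the learner never sees the pre-manipulation x
    return mistakes
```

Note that `update` receives only `(v, y)`, reflecting the information asymmetry emphasized in the rebuttals: the learner cannot recover the true features from its observations.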
**contributions**
1. They introduced an elegant notion of Strategic Littlestone Dimension $\mathrm{SLdim}(\mathcal{H}, G)$, which is based on hypothesis class $\mathcal{H}$ and manipulation graph $G$
2. They showed that $\mathrm{SLdim}(\mathcal{H}, G)$ fully characterizes the optimal mistake bound for a realizable case, by showing a lower bound on the mistake using a strategic counterpart of shattering tree and an upper bound using a strategic version of SOA. (Note: The realizable case here means there exists some $h \in \mathcal{H}$ with no mistakes on the manipulated data. so $h$ does not necessarily need to correctly classify the actual data.)
3. They extended their results for the agnostic case and showed an improved upper bound compared to previous works. They used the idea of approximating the whole class $\mathcal{H}$ using a finite set of representative experts. (Their benchmark is $\Delta_G^+ \cdot OPT$, where $OPT$ is the error of the best hypothesis and $\Delta_G^+$ is maximum out-degree of graph $G$)
4. Finally, they designed algorithms with regret upper bound for the setting where the manipulation graph is unknown to the learner. They consider two cases: (i) all data points use the same unknown manipulation graph from a set of graphs, (ii) each data point uses a separate manipulation graph (case (ii) better captures those real-world applications in which different data points might have different capabilities for manipulation.)
Strengths: This is a very strong paper
- Writing is clear, easy to follow, and well cited. Math notations and technical proofs are also clean and clear.
- Significance: the notion of strategic Littlestone dimension is a very valuable extension of the Littlestone dimension, characterizing the optimal mistake bound; it improves our understanding of the complexity of online binary classification in the presence of strategic behavior.
- The extension to unknown manipulation graphs is valuable, and I expect it to spur further research along this line by a broad range of researchers.
- Some of the previous results relied on knowledge about both pre-manipulation data and post-manipulation data (e.g. [Cohen et al. 2024]) but this paper uses only post-manipulation data
- A detailed comparison with previous works in the appendix for the realizable case (I didn't closely check all the details)
Weaknesses: I don't find any major weakness at all.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. In line 101, you mentioned that, your approach yields an improved bound for the agnostic setting. Can you please provide more details about this improvement? I believe you are comparing the upper bound in Theorem 4.1. with an upper bound from previous work.
2. In your setting, you assumed that each strategic data point only gets manipulated for a positive classification and if no manipulation leads to a positive classification, then the data point remains unchanged. Is this a necessary assumption for your results? In other words, I was wondering, do you anticipate that your result can be easily extended to the setting where all data points (even those without a chance of getting positive classification) have the potential to be manipulated?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: As mentioned by the authors in the checklist, this paper is theoretical with no direct implications. I completely agree with this.
However, I want to note that the motivation for strategic classification comes from real-world problems. I think the progress toward this direction by theoretical researchers and practitioners can only be beneficial for society. This paper improves our theoretical understanding of the complexity of this problem in a somewhat stylized case.
I can think of only one possible potential limitation (in a very high-level sense). Suppose a practitioner decided to use manipulation robust learning algorithms in their specific application. They need to model a set of manipulation graphs $\mathcal{G}$. If due to inaccuracy in the modelling by the practitioner, $\mathcal{G}$ only captures the certain type of manipulations done by group $A$ and does not capture the type of manipulations done by group $B$, then the resulting learning algorithm will be only robust with respect to a certain type of manipulation from group $A$. Therefore, group $B$ **might** have gained some unfair advantages. Hence, it is important to model the manipulation graph set $\mathcal{G}$ accurately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback on our work. We are happy to see that you find our notion of Strategic Littlestone Dimension elegant and valuable.
> Question 1: Improved bound for the agnostic setting
This improvement results from the comparison between Theorem 4.1 in our paper and the agnostic mistake bound in [Theorem 4.5, Ahmadi et al., 2023]. Both papers study the same setting of agnostic online strategic classification and design deterministic algorithms that achieve mistake bounds in terms of $\Delta\cdot$OPT. However, their result requires the hypothesis class $\mathcal{H}$ to be finite, whereas we only require finiteness of the strategic Littlestone dimension, which is a strictly weaker condition. We will add clarifications to revisions of our paper.
> Question 2: Data points with no feasible manipulation that receives positive label would stay unmanipulated
Yes, we do assume that data points that cannot manipulate to get a positive label would choose not to manipulate, and this is a necessary assumption for establishing our upper bound (whereas the lower bound would still hold if we remove this assumption). On the technical side, when false negative mistakes are made, this assumption enables us to identify the pre-manipulation features as the root of the strategic Littlestone subtree, and is crucial for establishing the “progress on mistakes” property. This assumption is also crucial for the previous upper bounds in [Ahmadi et al, Cohen et al.]. We believe that extending the results to the setting where all data points could manipulate is an interesting question for future research.
> The importance of modeling the manipulation graph G accurately
We completely agree that accurate modeling of the manipulation graph is crucial and that inaccurate estimations for certain graphs could lead to fairness issues. As a preliminary effort towards addressing unknown manipulation graphs, we allow the learner to run the strategic classification algorithm using knowledge about a potentially larger graph class that contains the true graph. The learner incurs only a logarithmic cost in the size of the graph class (assuming that taking union of the graphs does not significantly increase maximum in/out degree). Since this extra cost is only logarithmic, this (to some extent) allows the learner to adopt more conservative estimations of the manipulation that captures the types of manipulations from both groups. However, this approach focuses on the objective of minimizing the total number of mistakes rather than ensuring fairness among different groups. We believe the fairness implications that you described can be an interesting direction for future research.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. | null | null | null | null | null | null |
Differentially Private Optimization with Sparse Gradients | Accept (poster) | Summary: This paper explores differentially private optimization under the data/gradient sparsity assumption. For mean estimation with sparse data, it introduces new near-optimal bounds, improving previous results, especially in the high-dimensional setting. The corresponding lower bound is established using a novel block-diagonal construction. For DP ERM/SO with sparse gradients, the paper introduces a bias-reduced, randomly stopped SGD method, building upon their mean estimation mechanism. This approach achieves a nearly dimension-independent risk rate in the sparse gradient setting.
Strengths: * The paper is technically solid, with results covering both ERM and SO. In addition, some results also include lower bounds.
* The novel block diagonal construction of lower bound and the analysis of randomly stopped noisy SGD will benefit future research.
Weaknesses: The organization of the paper could be improved. For instance, adding more content to Section 6 and briefing the proof part in Section 5 would be beneficial.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors provide the running times analysis for Algorithms 2 and 3?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors address the limitation of this work in the checklist
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and feedback.
In the revision, we will reorganize the content. As you suggest, we will include results for DP-SCO from Section 6 (specifically from Appendix G.2), as well as a more extensive motivation and detailed proofs in Section 5.
__Questions__
Thank you for raising this question. Regarding the running times, for Algorithm 2 we need to compute $2^{N+1}+1=O(n)$ gradients (where $N$ is a truncated geometric random variable), and apply the Gaussian mechanism and projection onto the $\ell_1$-ball four times (these two procedures run in time nearly linear in the dimension $d$). Overall, the gradient complexity is $O(n)$ and the computational complexity is $O(nd)$.
For Algorithm 3, we can use Lemma 5.3 to provide an upper bound on the expected number of iterations (which is $O(n)$), and then multiply this number by the worst-case estimate of the complexity of Algorithm 2. This provides an in-expectation bound for the complexity of the algorithm.
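As a toy illustration of the gradient budget discussed above, one can sample the truncated geometric variable and compute the per-call gradient count (the geometric(1/2) distribution truncated at $\lfloor\log_2 n\rfloor$ is our illustrative assumption; the paper's exact truncation and parameters may differ):

```python
import math
import random

def truncated_geometric(n, p=0.5):
    """Sample N ~ Geometric(p) truncated at N_max = floor(log2 n)."""
    n_max = max(0, int(math.log2(n)))
    k = 0
    while k < n_max and random.random() > p:
        k += 1
    return k

def gradient_budget(n):
    """Per-call gradient count 2^(N+1) + 1, which is O(n) in the
    worst case since N never exceeds log2(n)."""
    return 2 ** (truncated_geometric(n) + 1) + 1
```

Under this illustrative choice, the tail $\Pr[N=j]$ decays like $2^{-j}$ while the cost grows like $2^{j}$, so the expected budget is only logarithmic in $n$ even though the worst case is $O(n)$.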
We will make sure to add these details in the revision.
---
Rebuttal Comment 1.1:
Title: Official comment by reviewer L9WR
Comment: Thank you for answering my question. I will keep my score. | Summary: The paper studies differentially private optimization under the sparse gradient assumption. The paper first considers private and sparse mean estimation. Both lower-bounds and nearly matching upper bounds are established. The paper then use the mean estimation algorithm to construct private gradients in DP-SGD and obtain new dimension independent rates for both DP-ERM and DP-SCO under convex and non-convex settings of the objective function.
Strengths: A crucial issue in differentially private optimization is that the utility rates depend on the dimension. This prevents the use of DP techniques in large models whose dimension scales to billions. The paper contributes to an important line of work in DP that tries to develop dimension-free error rates under structural assumptions on the problem. Both the setting in which gradients are sparse and the theoretical results under the sparsity assumption are novel.
Weaknesses: 1. There are several missing related works that should be included. [1-7] all derived dimension independent rates for DP optimization under various settings. [1] relaxed the dimension to a dependence on the rank of the feature matrix. [2, 5] studied the relaxed Lipschitz condition. [3,4] relaxed the dimension to a dependence on the trace of the Hessian. [6, 7] studied the semi-feature setting. Although they are not based on sparse gradients, I think it help to better position this paper when including all these existing dimension free rates in the DP optimization literature. Specifically, the assumption in [2, 5] is also only on the gradients, and [5] also develops an exponential mechanism that could be used for the pure DP case (though they did not discuss). In addition, [8] also studied multi-level Monte-Carlo for stochastic optimization.
2. All previous dimension-free rates [1-5] are for the unconstrained case. As discussed in [1, 2], there seems to be a separation between the constrained and unconstrained settings. For the unconstrained case, with additional assumptions, both dimension-free upper and lower bounds can be established. However, the dimension-dependent lower bound in [Bassily, 2014] is constructed for a constrained generalized linear loss, which also applies to the case where gradients have additional structure. See more discussions in [2, 5]. Results in this paper seem to be contradictory to these related works, where dimension-free rates exist for constrained optimization. Can the authors explain why and provide more insights?
3. Other questions: (1) In the ideal case, the results for the sparse case should recover the non-sparse case when s=d. However, there seems to be a large gap; (2) In the algorithms, the knowledge of the sparsity s is required. How to determine s in practical applications and is it possible to design algorithms without knowing s? (3) Some assumptions overlap with each other. For example, if f is L-Lipschitz and domain is bounded by D, this already implies that $|f(x) - f(y)|\leq LD$; (4) In private optimization, we need to add noise to gradients, which makes gradients not sparse any more. How hard is it for the sparsity assumption to hold along the trajectory? Can sparsity be preserved for all x_t?
[1] Evading the Curse of Dimensionality in Unconstrained Private GLMs. AISTATS, 2021.
[2] When Does Differentially Private Learning Not Suffer in High Dimensions? NeurIPS, 2022.
[3] Dimension Independent Generalization of DP-SGD for Overparameterized Smooth Convex Optimization. arXiv, 2022.
[4] DPZero: Private Fine-Tuning of Language Models without Backpropagation. ICML, 2024.
[5] The Power of Sampling: Dimension-free Risk Bounds in Private ERM. arXiv, 2024.
[6] Deep learning with label differential privacy. NeurIPS, 2021.
[7] On Convex Optimization with Semi-Sensitive Features. COLT, 2024.
[8] On the Bias-Variance-Cost Tradeoff of Stochastic Optimization. NeurIPS, 2021.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback.
1. Thank you for the references; we will make sure to add them to properly position our work within the field.
2. Unfortunately, it is not true that dimension-free rates are known only for unconstrained settings. In fact, there are works providing (nearly) dimension-free rates for the case of polytope feasible sets [22,23: Talwar, Thakurta, Zhang], [24: Asi, Feldman, Koren, Talwar] and [25: Bassily, Guzman, Nandi]. More importantly, there is _no contradiction with the lower bounds in BST14_, since their construction of fingerprinting codes (and packings for the pure case) uses _dense_ vectors. In fact, the key observation behind our sparse DP lower bound is that the sparsity constraint weakens this construction, and a block-diagonal construction with blocks as in BST14 yields a nearly optimal lower bound. Note too that polynomial-in-the-dimension lower bounds are still obtained for large enough sample size $n$, showing a smooth transition between the different regimes.
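Schematically, the block-diagonal idea places independent copies of a dense hard instance on disjoint coordinate blocks, so every data point is $s$-sparse in dimension $d = s \cdot (\text{number of blocks})$ while each block retains the dense hard structure (a sketch with illustrative names, not the paper's exact construction):

```python
import numpy as np

def block_diagonal_instance(block, num_blocks):
    """Given a dense hard instance `block` of shape (n, s), build a
    dataset of n * num_blocks points in dimension s * num_blocks where
    each point is supported on a single block of s coordinates."""
    n, s = block.shape
    d = s * num_blocks
    parts = []
    for b in range(num_blocks):
        X = np.zeros((n, d))
        X[:, b * s:(b + 1) * s] = block  # copy the dense instance into block b
        parts.append(X)
    return np.vstack(parts)
```

Each resulting row has exactly $s$ (potentially) nonzero coordinates, which is how the sparsity constraint weakens the dense BST14-style construction while preserving its per-block hardness.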
3. Other questions
1. For mean estimation (Table 1), our upper bounds smoothly transition between the different regimes. For DP-SCO and DP-ERM (Table 2), it is also possible to obtain this transition (simply by selecting the algorithm with the best rate depending on the instance parameters). In the original table we decided to include only the new high-dimensional rates, but we have now added the different regimes (see the attached file). We apologize if this caused confusion.
2. Each specific application may have an ad-hoc way of estimating $s$. For example, in embedding models, the sparsity (for the embedding layer) will be the number of columns. Alternatively, $s$ can be a hyperparameter (this does not compromise privacy, as long as the hyperparameter selection is done privately). Designing algorithms that work without knowing $s$ is an interesting question for future research---we will add this to the revision.
3. Regarding the overlap in Lipschitzness and boundedness assumptions, we opted to keep this (sometimes redundant) parameterization since we are combining different assumptions for different results. Thank you for pointing this out.
4. There might be a potential misunderstanding here; the gradient sparsity assumption is made over the raw _noise-free_ gradients. Any noise addition (or more general DP procedure) applied to these gradients does not break the sparsity assumption. Please let us know if this resolves your question.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response!
I think a better understanding regarding the separation between constrained and unconstrained case mentioned in papers [1,2,5] is valuable to the community. Please check their arguments carefully and include a detailed discussion in the next version of the paper.
In my understanding, if the gradients are always sparse and the support is fixed, then dimension-independent rates are natural regardless of the constrained sets as one can transform this d dimensional problem to an s dimensional one. However, if the support is not fixed, I am not clear what will happen. For the sparsity assumption, let me clarify my question. What I am confused is that $x_t$ can always be dense since we add noise to make it DP. Then how strong is this gradient sparsity assumption evaluated on every dense $x_t$?
Anyway, I don't have other problems. I increase the score to 5, which is on the positive side. I am willing to support the acceptance of the paper.
---
Reply to Comment 1.1.1:
Title: Further responses
Comment: Thank you for engaging in further discussions. We proceed to answer your comments/questions.
*[..] Please check their arguments carefully and include a detailed discussion in the next version of the paper.*
We appreciate the provided references, and we will make sure to incorporate them in the next version.
*In my understanding, if the gradients are always sparse and the support is fixed, then dimension-independent rates are natural regardless of the constrained sets as one can transform this d dimensional problem to an s dimensional one. However, if the support is not fixed, I am not clear what will happen. For the sparsity assumption, let me clarify my question. What I am confused is that $x_t$ can always be dense since we add noise to make it DP. Then how strong is this gradient sparsity assumption evaluated on every dense $x_t$?*
We agree that the fixed support sparse gradient case is straightforward. We also agree that $x_t$ (and furthermore, its minibatch gradients) can be fully dense. However, this is not a contradiction as our assumption is imposed on the ***individual gradients at arbitrary points***. Now, if your question is about how strong it is in practice to have sparse gradients at dense $x_t$, our answer is that for our applications of interest it is not strong. E.g., for embedding models the sparsity arises from the embedding of categorical features, hence it holds regardless of the iterate. We hope this resolves your question.
*[..] I increase the score to 5, which is on the positive side. I am willing to support the acceptance of the paper.*
Thank you for taking our feedback into account. | Summary: The paper provides algorithms for DP optimization with sparse gradients, proving both upper bounds and lower bounds, which almost match.
Strengths: 1. The paper has a nice presentation, starting from sparse mean estimation upper bounds and lower bounds, then going into ERM with sparse gradients and dealing with the bias issues introduced by the projection estimator.
2. The upper bounds and lower bounds almost match.
Weaknesses: 1. It is a pure theory paper, it would be more interesting to see if the algorithm can inspire improved practical applications.
2. The content is a bit dense, without summary or conclusion sections.
Technical Quality: 3
Clarity: 3
Questions for Authors: The proposed algorithms seem not too hard to use in practice; could the authors comment on whether they can be used practically? If the authors have tested them in practice, it would be interesting to see some results.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are well covered in the content.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We will now elaborate on the comments.
Thank you for your comment on practical applications. In this regard, the idea of gradient sparsity has already impacted practical DP optimization. The most immediate references in this respect are [6: Zhang, Mironov, Hejazinia] and [7: Ghazi, Huang, Kamath, Kumar, Manurangsi]. Regarding the specific contributions of this work, we believe that significant components of our results can inspire practical improvements. First, the projection mechanism and compressed sensing approaches are easy to apply and should lead to better statistical performance in mean estimation than, e.g., the sparse vector technique (which is the main approach in [6,7]). Second, our regularized output perturbation approaches for DP-SCO (Section G) are easy to implement. Third, towards neural network applications (such as embedding models), the computational benefits of sparsity are more significant under structured sparsity, e.g., in the case of embedding models we need the sparsity to operate at the level of rows of the embedding matrix. We believe that ideas used in the context of group or structured sparsity can be of use here, but this is out of the scope of the current submission. Finally, a major roadblock for practical applications is the heavy-tailed nature of the bias-reduced gradient estimator used for SGD. While the algorithm in its current form might be less practical, we hope our work may inspire further ideas and lead to more practical methods.
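To make the statistical benefit of sparsity concrete, here is a toy sketch. It is our own illustration, not the paper's projection mechanism (which projects onto a convex relaxation of the sparse ball and calibrates noise to the privacy budget); it only shows why exploiting sparsity helps: truncating a noisy mean to its $s$ largest coordinates shrinks the error from roughly $\sigma\sqrt{d}$ toward $\sigma\sqrt{s}$ when the true mean is $s$-sparse.

```python
import numpy as np

rng = np.random.default_rng(0)

d, s, n = 1000, 10, 500
true_mean = np.zeros(d)
true_mean[:s] = 5.0  # s-sparse population mean

X = true_mean + rng.standard_normal((n, d))

# "Gaussian mechanism" noise on the empirical mean (the calibration of
# sigma to an (eps, delta) budget is omitted in this toy).
sigma = 0.5
noisy_mean = X.mean(axis=0) + sigma * rng.standard_normal(d)

# Crude sparsification: keep only the s largest-magnitude coordinates.
sparse_mean = np.zeros(d)
top = np.argsort(np.abs(noisy_mean))[-s:]
sparse_mean[top] = noisy_mean[top]

err_dense = np.linalg.norm(noisy_mean - true_mean)    # ~ sigma * sqrt(d)
err_sparse = np.linalg.norm(sparse_mean - true_mean)  # ~ sigma * sqrt(s)
```

With these (arbitrary) constants the truncated estimate is an order of magnitude more accurate than the dense private mean, which is the qualitative gap the paper's bounds quantify.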
We apologize for the current dense content; in the revision, we will try to rewrite to make it more accessible.
__Questions__
As we mention above, the bias reduction method may be hard to implement in practice. Particularly, the heavy-tailed nature of the gradient estimator may pose convergence challenges. Boosting might mitigate this, but we are interested in continuing to investigate alternative approaches that may be more practical.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will keep my score. | Summary: This paper addresses the problem of DP-Convex optimization and DP-SCO in scenarios where the gradient of each individual sample is $s$ sparse. The main question explored is how sparsity can help improve the known rates for DP convex optimization.
The main contributions of the paper are as follows:
DP Sparse Mean Estimation of High-Dimensional Vectors: The authors investigate DP mean estimation under the assumption of sparsity. They use the projection mechanism from Nikolov et al. to propose algorithms for both pure-DP and approximate-DP settings. The error of the algorithm scales with $s$ when the number of samples is moderate. The main results are presented in Theorems 3.2 and 3.3.
Lower Bound for Mean Estimation under Sparsity: The authors provide lower bounds for mean estimation under sparsity. Note that due to the reduction in BST14, the lower bound on mean estimation can be translated to a lower bound on DP-ERM.
Algorithms for DP-ERM under Sparsity: The authors propose an interesting algorithm for DP-ERM under the sparsity assumption. To compute gradients, they use their proposed algorithm for mean estimation as a black-box. Unlike DP-SGD, the gradient in this case is "biased." To address this issue, the authors introduce an interesting approach using random batch sizes with an exponentially increasing schedule to achieve a bias similar to that of a full-batch algorithm.
Strengths: I think the paper addresses an important problem with practical relevance. I think the paper is complete in the sense that the authors provide a near-complete story of optimization under sparsity, at least in terms of achievable rates. The idea behind the optimization algorithm is interesting.
Weaknesses: The main drawback of this work is its presentation. Some of the proof is very difficult to parse. I have difficulty understanding the many algorithmic choices behind Algorithm 2. In particular, the idea of using random batch size, fully adaptive DP mechanism, specific distribution for batch size are not clear to me. I think the authors should provide a discussion on the necessity of these particular algorithmic choices.
Technical Quality: 3
Clarity: 2
Questions for Authors: I can't find the results regarding DP-SCO in the paper. Maybe I am missing something.
1- Is there any way to extend the results to the case with approximate sparsity?
2- what is p_n in algorithm 2?
3- What is the importance of random batch size?
4- Theorem A.6 seems to have a typo?
5- Lines 641 to 642: $1[T\geq t]$ has been changed to $1[T\leq t]$, which is not clear to me. Step 1 needs a clearer explanation.
6- What is the shortcoming of “equal” privacy budget allocation as in the BST14 paper?
7- What is the issue with using full-batch for computing the gradient?
8- What is the definition of $\mathcal{F}_{t-1}$ in prop. E.1?
9- Line 644, it is not clear to me why conditional expectation of bias and variance are also bounded by b and v.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable and detailed feedback.
Firstly, we apologize for the lack of clarity of the algorithmic choices and the proofs in our submission. Here is an overview of the choices behind Algorithm 2. As discussed in the paper, the near-optimal mean estimation algorithms we introduce are _biased_. Simply using SGD with this mean estimation procedure for minibatch gradients would result in a _polynomial degradation_ of the rates (see, e.g., [6: Zhang, Mironov and Hejazinia], who carry out this approach). To handle this issue, we propose a _bias-reduction method_, which closely follows the approach in [14: Blanchet and Glynn]. This randomized method produces an estimator that telescopes in expectation, resulting in bias which scales as that of the largest possible batch, but with minibatch sizes that are typically much smaller. The exponential range of the batch size follows from the telescoping idea.
The bias-reduction approach is beneficial for privacy, because when minibatches are smaller, one can leverage _privacy amplification by subsampling_. However, since batch sizes are random, the classical advanced composition DP analysis does not suffice. Here is where we use the _fully adaptive composition_ theorem [15: Whitehouse, Ramdas, Rogers and Wu]: in particular, since our batch randomization is predictable, we can define stopping times in such a way that we do not exceed the privacy budget. This however introduces challenges in the SGD analysis, which are addressed by the _boosting_ procedure.
We will include a more detailed overview of this algorithm and its analysis in the revision.
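The randomized telescoping idea can be sketched in a few lines. This is our own toy, not the paper's Algorithm 2: the private sparse mean estimator is replaced by a surrogate with a deterministic $1/\sqrt{B}$ bias, there is no privacy noise, and the exact cancellation inside each correction is an artifact of the surrogate. It only illustrates how a random batch level with $p_n \propto 2^{-n}$ and a reweighted antithetic correction telescopes the bias down to that of the largest batch.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_mean(batch):
    # Stand-in for a private sparse mean estimator whose bias decays as
    # 1/sqrt(B) with the batch size B (a deterministic surrogate bias).
    return np.mean(batch) + 1.0 / np.sqrt(len(batch))

def bias_reduced_estimate(data, max_level=8):
    # Base estimate from the smallest batch (size 2).
    base = biased_mean(rng.choice(data, size=2, replace=False))
    # Draw a level n with p_n proportional to 2^{-n}: large batches of
    # size 2^{n+1} are used only occasionally.
    levels = np.arange(1, max_level + 1)
    p = 2.0 ** (-levels.astype(float))
    p /= p.sum()
    n = rng.choice(levels, p=p)
    batch = rng.choice(data, size=2 ** (n + 1), replace=False)
    half = 2 ** n
    # Antithetic correction: full-batch estimate minus the average of the
    # two half-batch estimates, reweighted by 1/p_n. In expectation these
    # corrections telescope, leaving only the largest batch's bias.
    delta = biased_mean(batch) - 0.5 * (
        biased_mean(batch[:half]) + biased_mean(batch[half:])
    )
    return base + delta / p[n - 1]

data = rng.standard_normal(2000) + 3.0
estimates = [bias_reduced_estimate(data) for _ in range(20000)]
naive = [biased_mean(rng.choice(data, size=2, replace=False))
         for _ in range(20000)]
```

Averaged over many draws, the bias-reduced estimates land near the true mean while the naive small-batch estimates keep their $1/\sqrt{2}$ bias, even though most draws still use very small batches.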
__Questions__
About DP-SCO results. Due to space limitations, all results for DP-SCO were deferred to Appendix G. In the final version of the paper we will include the main results from that section (particularly, Appendix G.2).
1. Yes. As pointed out in Remark 2.1, in all of our upper bounds we can replace the set of sparse vectors by a scaled $\ell_1$-ball. This set is a more robust way to quantify sparsity, as known from the compressed sensing literature.
2. $p_n$ is the probability of choosing a batch size $2^{n+1}$, i.e., $p_n=C_M/2^n$ (where $C_M$ is a normalizing constant). This is explained in lines 233-236, but we will make it explicit in the pseudocode to avoid confusion.
3. Random batch sizes allow bias reduction with smaller minibatches, which are amenable for privacy amplification by subsampling. Using a full batch estimator would incur similar bias, but no privacy amplification. We will make sure to expand on this important aspect.
4. Thank you. We found an incorrect indexing: $[1:t-1]$ instead of the correct $[0:t-1]$.
5. Apologies for causing confusion: we simply wrote the same event in two different ways, i.e., $\{t\leq T\}$ and $\{T\geq t\}$. In the revision, we will retain the same notation.
In the first step we use the regret bound for biased SGD, and in the second step we write the finite sum from $0$ to $T$ as a series with zero terms (due to the indicator) for $t>T$. Please let us know if more clarification is needed.
6. We are not sure which part of the paper you are referring to. Can you please let us know, and we can clarify it accordingly?
7. Please see answer 3 above.
8. $\\{ {\cal F}_t \\}_{t\in\mathbb{N}}$ is the natural filtration, i.e., ${\cal F}_t = \sigma(x^s : s \leq t)$. We will add a clarification of this in the revision.
9. The sources of randomness used in Lemma 5.2 are only the batch size r.v. N and the DP noise used to compute ${\cal G}(x)$. While we could have stated Lemma 5.2 for an iterate $x^t$ (and correspondingly condition on ${\cal F}_{t-1}$), we believe the current presentation of this lemma is cleaner, and avoids unnecessary notation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response!
Regarding question 6 in my initial review: the "usual" analysis of DPGD is based on the following: assume we want to run the algorithm for $T$ iterations and the privacy budget is $\rho$-zCDP. Then, we set the noise variance such that the privacy budget at each iteration is $\rho/T$. I want to better understand the shortcomings of this approach compared to the proposed method in the paper.
---
Reply to Comment 1.1.1:
Title: Answer to question 6 by reviewer 9vLf
Comment: Thank you for the clarification. Before answering, please keep in mind that for Noisy SGD algorithms the minibatch gradients estimators are unbiased, so what we discuss next is specific to the sparse setting.
In the sparse setting we only have at our disposal (minibatch gradient) mean estimation algorithms whose bias scales as $1/\sqrt{B}$ where $B$ is the batch size. Our main limitation then comes from this bias (which does not vanish with a larger number of iterations); moreover, note that the benefits of privacy amplification by subsampling take place for smaller values of $B$. Hence these two effects pose a strong privacy/accuracy tradeoff.
Regardless of the batch size, and as you correctly point out, we also need to take into account the effect of composition. Optimizing on both batch size and number of steps, one can see that the biased SGD algorithm will converge with a rate which has polynomial gaps in $1/\varepsilon$ and $s$ (compared to the lower bounds); see, e.g. the work of Zhang, Hejazinia and Mironov [6]. By contrast, our bias reduction approach leads to smaller bias but can still benefit from privacy amplification by subsampling. This leads to various technical challenges which are the core of our analysis in Section 5.
We hope this resolves your question and allows you to better appreciate our technical contributions. | Rebuttal 1:
Rebuttal: Updated table incorporating the low and high dimensional rates for DP optimization.
Pdf: /pdf/501ed5de0bfebdfc3652a49dbc69d6581b1f264a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Treeffuser: probabilistic prediction via conditional diffusions with gradient-boosted trees | Accept (poster) | Summary: The paper proposes Treeffuser, a nonparametric method for modeling the output distribution. Treeffuser learns a diffusion model using gradient-boosted trees, and uses conditional diffusion models to produce a distribution over the response variable given an input vector. Since Treeffuser user gradient-boosted trees, it is adept at providing accurate predictive distributions for tabular data problems. Experiments on toy examples, real-world datasets, and a case study highlight the flexibility and effectiveness of Treeffuser for accurately modeling the output distribution for a wide range of use cases.
Strengths: - The experimental results section demonstrates Treeffuser's effectiveness on complex contrived examples, real-world tabular regression datasets in comparison with existing methods, and a realistic case study.
- The paper is well-written and relatively easy to follow.
- The proposed approach is very flexible and nonparametric, enabling it to model any type of response distribution. This aspect is especially valuable for tabular regression problems, in which GBDTs do not inherently provide any type of uncertainty in their predictions.
- Treeffuser works well even with default hyperparameters.
Weaknesses: - As the authors note, Treeffuser must solve a stochastic differential equation to generate samples, which can become expensive if there are a large number of example to predict and each prediction requires the generation of a large number of samples.
- No complexity or empirical runtime analysis is provided. This type of analysis would be greatly beneficial to readers and practitioners to help them decide if and when to use Treeffuser for their particular problem.
- The comparison to other methods (Section 5.2) could be improved. For example, iBUG's $k$ hyperparameter is tuned using values [20, 150]; however, $k$ is a critical hyperparameter, and increasing the number of potential values may significantly increase iBUG's performance. Additionally, iBUG can non-parametrically model the response using kernel density estimation (KDE) which may improve its performance on datasets in which the output is not expected to be Gaussian.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Do the authors have negative log likelihood (NLL) results?
- I'm surprised Treeffuser performs better than iBUG for point predictions (Table 4) since iBUG is simply using the underlying GBDT model, do the authors have a hypothesis as to why Treeffuser performs better than XGBoost in terms of point performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, the authors address the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Rebuttal Reviewer knMV
We thank knMV for their review. We are encouraged that the reviewer found the paper well-written and easy to follow, and our approach very flexible and valuable for tabular regression problems with UQ. Below, we provide additional results and discussion that answer the questions and concerns the reviewer raised. The new experiments definitely strengthened the paper!
## Weaknesses
**W1. Treeffuser must solve a stochastic differential equation to generate samples, which can become expensive**
**W2. No complexity or empirical runtime analysis provided**
Thank you for the great comments. In the paper, we provided some runtime analysis of Treeffuser in lines 257-261. Yet, we agree that more runtime analyses would be helpful to readers and practitioners, so we generated three new figures by running Treeffuser with default parameters on a 2020 MacBook Pro.
1. [Figure 1](https://imgur.com/a/Szo5ihU): runtime in seconds for training Treeffuser on subsets of different sizes of the M5 dataset (section 5.3). The shape of the training data is indicated on the plot. Error bars represent the standard error over five runs of Treeffuser.
2. [Figure 2](https://imgur.com/a/CzsPy1Z): runtime in seconds for training Treeffuser on the other datasets used in the paper. Error bars represent the standard error over five runs of Treeffuser.
3. [Figure 3](https://imgur.com/a/5RQbM77): average runtime in seconds for producing one sample after training Treeffuser. The average is computed by sampling five thousand points from Treeffuser using its default settings of 50 discretization steps.
From these results, we find that:
- Treeffuser is very fast at fitting moderately sized datasets, with runtime increasing linearly with dataset size;
- sampling a single point is very fast ($\approx 10^{-4}$ seconds) and can be parallelized;
- yet, drawing many samples can take more time, e.g., with a large test set.
We also agree with the suggestion of including a complexity analysis and will add a detailed discussion in the appendix. To be brief here, the complexity for training is the same as for LightGBM, with the subtlety that the dataset size is multiplied by `n_repeats` (see Alg 1 and Appendix C), and a different model is trained for each dimension of $y$. The complexity for sampling is the same as evaluating the trees `n_sampling_steps * y_dim` times.
**W.3 The comparison to other methods (Section 5.2) could be improved. For example, iBUG's hyperparameter [...] which may improve its performance on datasets in which the output is not Gaussian.**
Thank you for your suggestion. Following your advice, we
1. increased the search space of $k$ using the method for optimal tuning of $k$ (Algorithm 3 in iBUG's paper)
2. implemented iBUG with a KDE likelihood -- tuning the KDE bandwidth in [0.0001, 1000].
The results are available [here](https://imgur.com/a/D1x4k9S).
The conclusions are unchanged in both cases: Treeffuser outperforms iBUG (with and without KDE). Thank you for suggesting these great baselines; we have updated section 5.2 of our paper with them. We hope that this addresses your concerns.
## Questions
**Q1. Do the authors have negative log likelihood (NLL) results?**
We thank the reviewer for this question. We do not provide NLL results. While NLL can be approximated using KDEs on the samples outputted by Treeffuser, we found that KDEs are sensitive to the choice of bandwidth. We also note that exact likelihood computations are possible for diffusion models when the score estimator is continuous everywhere, as with neural-based diffusions (see section D.2 [8]). However, this is not applicable for GBT, because trees are not continuous.
Fortunately, this apparent limitation has no practical implications:
- For evaluation, we use CRPS, which can be evaluated from samples. CRPS is a proper scoring rule [16] with advantages over NLL, such as better handling of tail probabilities [17]. CRPS is commonly used for probabilistic predictions [16,18].
- For concrete downstream applications, we argue that being able to generate samples from $p(y\mid x)$ is more important than raw values of $\log p(y\mid x)$.
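For concreteness, CRPS can be estimated directly from a set of samples via the energy form $\mathrm{CRPS}(F, y) = \mathbb{E}|X - y| - \tfrac{1}{2}\mathbb{E}|X - X'|$ with $X, X' \sim F$ i.i.d.; the following helper is our own sketch, not part of the paper's package:

```python
import numpy as np

def crps_from_samples(samples, y):
    """Empirical CRPS via the energy form:
    CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|, with X, X' i.i.d. from F.
    This equals the exact CRPS of the empirical CDF of `samples`.
    """
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2
```

Lower is better, and a perfect point mass on the observed $y$ scores zero; averaging the value over a held-out test set scores the whole predictive distribution, which is how we use it in our evaluation.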
**Q2. I'm surprised Treeffuser performs better than iBUG for point predictions [...] Do the authors have a hypothesis as to why Treeffuser performs better than XGBoost in terms of point performance?**
Thank you for your question. We also anticipated that iBug would excel in point prediction since it relies on XGBoost. However, we are not surprised by the results.
1. First, we nuance that Treeffuser only slightly outperforms iBug for point prediction, with iBug often very close behind.
2. Second, we point out that iBug's point predictions do not mathematically coincide with XGBoost's. Indeed, the point predictions of a probabilistic method are computed as the expected mean: the empirical average of 50 samples from the modeled conditional probability distribution $p(y\mid x)$. So if the conditional probability returned by iBUG has errors, then the point prediction might also have errors.
In light of your comment, we conducted additional experiments for point predictions against XGBoost and LightGBM directly (with hyperparameter tuning optimization).
The results are shown in [this table](https://imgur.com/a/4gNpSWd), where we also updated iBUG's point predictions with your suggested hyper-parameter tuning improvements which yielded similar but slightly improved results.
As expected, vanilla XGBoost and LightGBM outperform or tie with all the probabilistic methods. In particular, they are often comparable with Treeffuser, suggesting that Treeffuser provides probabilistic prediction without sacrificing point predictions of the mean.
We updated Table 4 with these results and added further discussion in the paper for clarity. Thank you for raising this point and suggesting the addition of more baselines; they strengthened the paper!
---
Rebuttal 2:
Title: Thank you
Comment: I thank the authors for their thorough response, and appreciate the additional experimental results. Overall, my concerns have been largely addressed and I have updated my score accordingly. | Summary: The authors propose a methodology for computing probabilistic predictions in regression problems using diffusion and gradient-boosted trees. In particular, their methodology can generate samples from p(y | x) from which statistics such as estimated quantiles could be recovered. They find that their method can learn flexible and non-standard distributions well and also works well on real-world datasets.
Strengths: The method results in a tractable standard supervised-ML problem, where the dataset is an augmented version of the original dataset.
The method produces good results on a range of datasets.
The paper goes into detail on the relevant diffusion equations governing the process.
I liked the clever application with the newsvendor problem.
Weaknesses: The work is not clear about its contribution and what parts of the diffusion-based setup are new vs borrowed from existing works.
There could be more and better empirical comparisons.
Notation and clarity could be improved.
* Line 129 defines a vector a but that is not used in equation (11).
* Not obvious how eqn (11) simplifies to eqn (12).
The proposed method may not be easily used or widely applicable because of the complex process needed to generate samples, which involves solving differential equations.
Technical Quality: 3
Clarity: 3
Questions for Authors: I'm left with some basic questions about the method. What part of the contribution is due to the specific way the diffusion process was modeled and what part is due to the use of trees? How does the method differ from other diffusion-based approaches to probabilistic prediction such as the cited CARD paper? What were the key breakthroughs (if there are differences) in how you set up the diffusion problem?
Tagasovska and Lopez-Paz's Single-Model Uncertainties for Deep Learning (NeurIPS 2019) is a relevant work to compare to here. If I understand right it's essentially the same as your quantile regression comparison except you use trees instead of neural networks to learn f(x, q). How would the method perform with neural networks?
How easy is the SDE solver to stitch with the outputs of the trees?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Rebuttal Reviewer 3Apn
We thank Reviewer 3Apn for their valuable feedback. We are pleased to hear that they appreciate our application to the newsvendor problem. Based on their comments and questions, we clarify our contributions, the applicability of our method, and its implementation below.
## Weaknesses
**W1. The work is not clear about its contribution and what parts of the diffusion-based setup are new vs borrowed from existing works.**
Please see our answer to Q1.
**W2. There could be more and better empirical comparisons.**
Thank you for the comment. We selected five strong approaches that practitioners might choose for probabilistic predictions. However, we would be glad to compare the performance of Treeffuser (our method) against other methods.
What are some specific datasets or methods you think would be valuable for comparison in our paper?
**W3. Notation and clarity could be improved**
Thank you for the feedback.
- The vector **a** is a dummy vector used to explain notation, so it does not appear in the equation. We rephrased the sentence to avoid confusion.
- The derivation of Eq. 12 was shown in Appendix Eqs. 19 to 22. We updated the paper to make sure this is also clear in the main text.
**W4. The proposed method may not be easily used or widely applicable because of the complex process needed to generate samples involving solving differential equations.**
Thank you for raising this point, usability is very important to us. But we are surprised by this comment, because Treeffuser is actually _straightforward to use and widely applicable_. This is a key contribution of our work. We argue this is the case for the following reasons:
- Concretely, we designed a simplified user experience by wrapping Treeffuser in a plug-and-play Python package that solves the differential equations for the user. With the accompanying package in the supplementary material, using Treeffuser is as simple as:
```python
model = Treeffuser()
model.fit(X, y)
samples = model.sample(n_samples)
```
- Offering such a simple user interface is only possible because we built Treeffuser with a flexible estimator (trees) and flexible model (diffusions). As a result, this simple training procedure is the same regardless of the type of data (e.g. heavy tailed, multi-modal, heteroskedastic, categorical, missing).
- In practice, Treeffuser is accurate on diverse datasets, even without tuning (see our studies in Table 1, 2, and 4).
To reiterate, while solving differential equations is indeed complex, it is not more complex than fitting boosted trees or optimizing thousands of neural network parameters. Efficient packages such as `xgboost`, `sklearn`, and `torch` made these tasks accessible to any user. With our `treeffuser` companion package, we aim to make diffusion-based probabilistic predictions for tabular data equally accessible.
## Questions
**Q1.1 What part of the contribution is due to the specific way the diffusion process was modeled and what part is due to the use of trees?**
Thank you for the question. We think that both parts cannot really be separated. The main contribution and goal is to use trees, but we had to adapt the diffusion model process to do so. (For example, trees only have a 1d output.)
Yet, you are right that the way we modeled the diffusion is important. For example, the score reparametrization on line 133 (and Eq. 13) is crucial for performance.
To highlight the importance of our design choices, we added this discussion to the main text, and included an ablation study on the score reparametrization in the Appendix.
**Q1.2 How does the method differ from other diffusion-based approaches to probabilistic prediction such as the cited CARD paper?**
The differences between CARD[9] and Treeffuser are:
1. CARD uses a specialized neural network architecture, while we use out-of-the-box gradient-boosted trees (e.g. XGBoost, LightGBM).
2. CARD uses a discrete time approach [13-14], while we formulate our conditional diffusion problem with continuous SDEs [8], which we believe renders the theory easier to follow.
3. Our approach requires far fewer function evaluations (i.e., 50 vs. 1000) to generate a sample.
4. CARD requires two training steps: a first model is trained to predict the mean of the target only, then a second model learns the distribution of the data using a diffusion that takes the prediction from the first model as input. Treeffuser directly trains the diffusion model on the target. It doesn't require training any auxiliary model for conditioning and is thus simpler.
5. We attempted to use CARD in our experiments for comparison but were unable to obtain good results.
**Q2. Tagasovska and Lopez-Paz's Single-Model Uncertainties for Deep Learning (NeurIPS 2019) is a relevant work to compare to here. [...] How would the method perform with neural networks?**
Thank you for sharing this reference. We were not aware of it but now include it in the text. Indeed you are correct, the baseline quantile regression used in our paper implements the same idea using trees instead of neural networks.
In fact, we originally tried a similar version of quantile regression with neural networks, and we found that on all our experiments, the tree-based quantile regression performed better. This is in line with the observed dominance of GBTs over neural networks on tabular data [10-12].
**Q3 How easy is the SDE solver to stitch with the outputs of the trees?**
Integrating the SDE solver with the outputs of the GBT is easy, akin to neural network based diffusions. Concretely, the trees are wrapped into a class with a `predict` function that returns the learned score, which is then fed to the SDE solver.
We ensured our code is modular to support any SDE solver, while hiding all the SDE complexity from the users. For example, `model.sample(n_samples)` calls the standard Euler-Maruyama SDE solver that handles the numerical integration automatically.
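As a rough sketch of this wiring (the wrapper class and the unit drift/diffusion coefficients are our own simplifications for illustration, not Treeffuser's actual implementation):

```python
import numpy as np

class TreeScoreModel:
    """Illustrative wrapper: any fitted tree ensemble exposing .predict
    can serve as the learned score s(y, x, t) handed to the SDE solver."""
    def __init__(self, trees):
        self.trees = trees

    def predict(self, y, x, t):
        # Arrange (y, x, t) into the flat feature layout the trees expect.
        features = np.column_stack([y, x, np.full(len(y), t)])
        return self.trees.predict(features)

def euler_maruyama_sample(score_model, x, n_steps=50, seed=0):
    """Minimal Euler-Maruyama loop for a reverse-time diffusion; the
    coefficients here are placeholders, not Treeffuser's actual SDE."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    y = rng.standard_normal(len(x))           # start from reference noise
    for i in range(n_steps):
        t = 1.0 - i * dt                      # integrate from t = 1 down to 0
        drift = score_model.predict(y, x, t)  # drift driven by the learned score
        y = y + drift * dt + np.sqrt(dt) * rng.standard_normal(len(y))
    return y
```

Any SDE solver that accepts a callable score could be substituted for the Euler-Maruyama loop above.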
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response! I appreciate the authors highlighting ease-of-use of the method as well as additional thoughts about neural networks and GBTs.
I'm still a bit unsettled on my question about novelty. My read of the method is that there are two distinct parts of the approach, (1) how to augment the original train set; and (2) what model to fit on the new train set. Presumably one could swap neural networks in for (2), with some impact on quality (perhaps a decrease on simple tabular data but an improvement on complex inputs). That could even be a comparison in this paper to help differentiate between the tree-part of the contribution and the diffusion setup part of the contribution.
It's clear to me now that CARD is a different approach to solving the diffusion problem but does that mean the fundamental approach used in this paper (which again, would seemingly work for any model structure) novel? The paper does not clearly say, nor do the authors, which makes me wonder if it is using an existing approach. They say they have adapted diffusion models for trees but without much specificity about what is novel about this adaptation (many models are single-output, not just trees).
I will leave my score constant, with the proviso that if the ACs and other reviewers believe there is novelty in the overall approach to solving the diffusion problem (and not just to using trees as step (2) of the solve instead of other model classes), I would support the paper more strongly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We believe there might be some misunderstanding regarding the novelty and potential impact of our paper. We would like to clarify it.
The review appears to overlook the significance of using trees, implying they could be replaced with other models, like neural networks. However, for probabilistic predictions on tabular data, we believe that the use of trees alone has substantial relevance, and that it is a meaningful contribution to the community for the following reasons:
- *Trees are fast and easy to train.* Neural networks are slow to train and often require specialized hardware, especially within diffusion models. As mentioned in Section 1, Treeffuser trains from a table with 112,000 observations and 27 variables in 53 seconds on a laptop.
- *Trees make learning robust.* Neural networks are sensitive to the choice of architecture and training hyperparameters. Treeffuser outperforms existing methods across diverse datasets even without tuning.
- *Trees are accurate on tabular data.* Trees have been shown to outperform neural networks on tabular data, including complex tabular data.
We clarify that when we say that we adapt diffusions to gradient-boosted trees, we mean that we show concretely how to set up the learning objective (Theorem 2) and demonstrate empirically that this approach works. However, we do not claim to introduce a new diffusion framework. As discussed, we use the continuous-time formulation from Song et al. (2021), originally conceived for the unconditional case $p(x)$, and adapt it to our conditional setting $p(y|x)$ (Theorem 1).
To conclude, we would like to emphasize once again that the use of trees is a central aspect
of our contribution, and overlooking this might miss the essence of our work. | Summary: This paper proposed a method called "Treeffuser" for probabilistic prediction (density regression) of tabular data.
Strengths: The paper is well-written and clearly presented the core idea as well as the main results.
Weaknesses: There should be more related methods to compare with in the experiment in section 5.3, especially more recent ones.
Technical Quality: 3
Clarity: 4
Questions for Authors: What are the potential applications of the proposed method, especially given that (1) it's designed for tabular data, and (2) it is generative but cannot provide density evaluation?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: My main concern about this proposed method is that it cannot provide exact density evaluation. This is also discussed in the limitations section by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Rebuttal reviewer iWzb
We thank reviewer iWzb for their comments. We are glad the reviewer found our paper to be well-written and clearly presented, and we appreciate their great questions and suggestions. We provide answers and discussions below.
## Weaknesses
**W1. There should be more related methods to compare with in the experiment in section 5.3, especially more recent ones.**
We thank the reviewer for the suggestion. We designed Section 5.3 to be a concrete example showing how simple it is to use Treeffuser for real-world applications. This section was not intended to be an extensive comparison like Section 5.2.
We selected the methods in Section 5.3 because they performed the best in Table 2, including the recent iBUG [1]. They also represent two types of methods with interesting properties for comparison with Treeffuser: a “hand-crafted” log-likelihood specially suited for the problem (NGBoost Poisson), and a “likelihood-free” tree-based model (amortized quantile regression). We omitted other methods in Figures 2 and 3 to enhance the figures’ legibility.
## Questions
**Q1. What are the potential applications of the proposed method, especially given that (1) it's designed for tabular data, and (2) it is generative but cannot provide density evaluation?**
Thank you for your question:
1. Methods for predictions from tabular data are one of the most frequent uses of machine learning in practical industrial applications (e.g. sklearn [2], or methods such as xgboost [3]). Uncertainty estimation in the form of modeling $p(y \mid x)$ is equally important in those applications and for risk-aware decision making [4-7].
2. We do not focus on providing density evaluation because it is unnecessary for our purposes. In practice, we aim to model $p(y \mid x)$ to make real-world decisions based on quantities such as $\mathbb{E}[f(y, x) \mid x]$. These quantities are purely evaluated from samples of $y \mid x$. For example, a user might be interested in the expected output of their factory given specific settings, the standard deviation, the expected profit, or the probability that profit falls below a certain threshold $c$, where their profit function $f(y, x)$ is a black-box function. With Treeffuser, we can model and compute quantities by using the samples to compute $\mathbb{E}[y \mid x]$, $\mathbb{E}[y^2 \mid x] - \mathbb{E}[y \mid x]^2$, $\mathbb{E}[f(y, x) \mid x]$, and $\mathbb{E}[\mathbb{1}_{f(y, x) \leq c} \mid x]$, respectively. In contrast, with density estimation $p(y \mid x)$, evaluating $\mathbb{E}[f(y, x) \mid x]$ or even $\mathbb{E}[y \mid x]$ would likely require advanced techniques to sample from $p(y \mid x)$ and use Monte Carlo estimators to compute these quantities. Treeffuser performs these computations directly.
To re-emphasize, while having an exact density evaluation could be useful in some scenarios, we believe having sampling is as important if not more important. In fact, the ability to sample from a density is an active research area (inverse transform sampling, rejection sampling), and Treeffuser enables sampling directly without modeling the density.
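As an illustration, each of these quantities reduces to a one-line estimate over the samples; a minimal sketch with synthetic draws standing in for `model.sample` and a made-up profit function:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for samples = model.sample(n_samples) at one fixed x.
samples = rng.normal(loc=100.0, scale=10.0, size=50_000)

def profit(y):
    # Made-up black-box f(y, x), for illustration only.
    return 2.0 * y - 50.0

mean_y     = samples.mean()                     # E[y | x]
var_y      = samples.var()                      # E[y^2 | x] - E[y | x]^2
exp_profit = profit(samples).mean()             # E[f(y, x) | x]
p_below_c  = (profit(samples) <= 100.0).mean()  # P(f(y, x) <= c | x), c = 100
```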
We have now expanded the paper’s introduction with the discussion above to emphasize that the focus of the paper is sampling from $p(y \mid x)$ instead of approximating the value $p(y\mid x)$. Thank you for helping us clarify any confusion about the goal and impact of Treeffuser.
## Limitations
**L1. My main concern about this proposed method is that it cannot provide exact density evaluation. This is also discussed in the limitations section by the authors.**
As detailed above, we did not aim to do exact density evaluation because the downstream applications we are focusing on do not need it. We focused on designing a method that provides high-quality samples, which is more important in practice. We believe that focusing on sampling is a strength, not a weakness.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I keep my original score of weak accept. | null | null | Rebuttal 1:
Rebuttal: # General rebuttal
We are grateful to the reviewers for their great feedback! Here, we summarize the key points from our rebuttals.
## Contributions
We clarify that our main contribution is to successfully combine diffusion models and gradient-boosted trees (GBT), and use them to provide state-of-the-art probabilistic predictions on tabular data. Our approach combines the robustness and accuracy of GBT with the nonparametric flexibility of diffusion models.
Treeffuser is straightforward to use and widely applicable, as it comes with a plug-and-play Python package and requires little to no tuning.
Using Treeffuser with default parameters requires three lines of code:
```python
model = Treeffuser()
model.fit(X, y)
samples = model.sample(n_samples)
```
Thanks to the use of diffusions and trees, this approach works with both multi-dimensional and uni-dimensional responses, regardless of whether $p(y \mid x)$ is multi-modal, heteroskedastic, heavy-tailed, or follows a simple normal distribution.
## Empirical experiments
We incorporated the reviewers' feedback to expand our empirical experiments and strengthen our paper. More specifically,
- We implemented a KDE-based version of IBUG and extended its hyperparameter search space. This improved the method’s performance, but Treeffuser still outperforms IBUG across most datasets.
- We added vanilla XGBoost and LightGBM to the point prediction benchmark tables. As expected, they outperform or tie with all the probabilistic methods. In particular, they often tie with Treeffuser, suggesting that Treeffuser provides probabilistic prediction without sacrificing average point predictions.
- We added an ablation study of Treeffuser where we removed the standard deviation reparametrization (line 133 and Eq. 13) and show that this design choice was important.
- We conducted an empirical runtime analysis of the training and sampling processes. The results show that training is fast (less than 2 minutes for the datasets in the paper) and scales linearly with dataset size; sampling is also efficient ($\approx 10^{-4}$ seconds per sample) but slow for large sample sizes, yet it can be scaled with parallelization.
## Other feedback
We clarify that our focus is on generating samples rather than approximating the density $p(y|x)$. Practical applications require quantities such as expectations, standard deviations, quantiles, among others; while these quantities are hard to compute with densities, they can be readily computed with samples.
We also improved the clarity of our notation and derivations in our method section.
## References
These are the references cited in our rebuttals.
[1] Jonathan Brophy, & Daniel Lowd. (2022). Instance-Based Uncertainty Estimation for Gradient-Boosted Regression Trees.
[2] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, E. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12, 2825–2830.
[3] Chen, T., & Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.
[4] Gneiting, T., & Katzfuss, M. (2014). Probabilistic forecasting. Annual Review of Statistics and Its Application, 1(1), 125-151.
[5] Begoli, E., Bhattacharya, T., & Kusnezov, D. (2019). The need for uncertainty quantification in machine-assisted medical decision making. Nature Machine Intelligence, 1(1), 20-23.
[6] Jillian M. Clements, Di Xu, Nooshin Yousefi, & Dmitry Efimov. (2020). Sequential Deep Learning for Credit Risk Monitoring with Tabular Financial Data.
[7] Taylor, J. W., & Taylor, K. S. (2023). Combining probabilistic forecasts of COVID-19 mortality in the United States. European Journal of Operational Research, 304(1), 25–41.
[8] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, & Ben Poole (2020). Score-Based Generative Modeling through Stochastic Differential Equations. CoRR, abs/2011.13456.
[9] Xizewen Han, Huangjie Zheng, & Mingyuan Zhou. (2022). CARD: Classification and Regression Diffusion Models.
[10] Borisov, V., Leemann, T., Seßler, K., Haug, J., Pawelczyk, M., & Kasneci, G. (2024). Deep Neural Networks and Tabular Data: A Survey. IEEE Transactions on Neural Networks and Learning Systems, 35(6), 7499–7519.
[11] Léo Grinsztajn, Edouard Oyallon, & Gaël Varoquaux. (2022). Why do tree-based models still outperform deep learning on tabular data?.
[12] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, & Artem Babenko. (2023). Revisiting Deep Learning Models for Tabular Data.
[13] Jonathan Ho, Ajay Jain, & Pieter Abbeel. (2020). Denoising Diffusion Probabilistic Models.
[14] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, & Surya Ganguli. (2015). Deep Unsupervised Learning using Nonequilibrium Thermodynamics.
[15] Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., & Liu, T.Y. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems. Curran Associates, Inc..
[16] Gneiting, T., & Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477), 359–378.
[17] Blicher Bjerregård, M., Kloppenborg Møller, J., & Madsen, H. (2021). An introduction to multivariate probabilistic forecast evaluation. Energy and AI, 4, 100058.
[18] Tony Duan, Anand Avati, Daisy Yi Ding, Khanh K. Thai, Sanjay Basu, Andrew Y. Ng, & Alejandro Schuler. (2020). NGBoost: Natural Gradient Boosting for Probabilistic Prediction. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Instance-Specific Asymmetric Sensitivity in Differential Privacy | Accept (poster) | Summary: In this paper, authors propose a new instance specific sensitivity based method for differentially private queries. In differentially private literature, sensitivity of the query (or the function of interest) is an important quantity, which affects the amount of noise we need to add in order to guarantee DP. In majority of DP mechanisms, the sensitivity is considered as the maximum change of the query when a single data point is changed in the data. Scaling the noise level with this global sensitivity might lead to large amount of noise added which then negatively affects the utility of the method. Instead of using this global sensitivity, another line of work is to use so called smooth sensitivity which calibrates the noise based on the data set at hand, leading to lower level of perturbation and better utility. In this paper, authors extend this line of work with their new sensitivity definition called *reflective inverse sensitivity* and the corresponding release method *asymmetric sensitivity mechanism* which is based on earlier AboveThreshold mechanism. Authors prove privacy and utility guarantees for the proposed method, and empirically demonstrate its benefits over the earlier smooth sensitivity based methods providing up to a magnitude difference in the accuracy.
Strengths: The sensitivity of a function is typically hard to bound without some strong prior knowledge on the private data set. Hence majority of the DP works resort to some ad-hoc bounds, e.g. through clipping of the function or the data, to limit the sensitivity of the function. This can create bias to the output, and analyzing the bias before the application is difficult. Hence the proposed method, which does not suffer from this limitation, is interesting. While the proposed solution builds on top of existing literature, the novelty of this works contribution clearly separates them from prior literature.
The theoretical analysis seems sound to me, and the utility result is an important finding. However, I think the exposition of this result could be further improved and will come back to this in the weaknesses.
I also believe that the proposed mechanism has broad application potential. For example tuning classifiers trained with public data based on their accuracy on the private data. The experiments in such tasks demonstrate significant benefits over the prior works.
Weaknesses: In general the paper reads well, but it took me a while to parse the main story together. I would suggest the authors simplify some parts of the introduction to make the paper easier to approach. For example, I was rather confused by the use of "inverse of a function" in the introduction, since I don't think it is clear at this point what kind of functions the paper talks about. Therefore it was unclear to me what would be the argument of the function to take the inverse from. This becomes very clear later in the paper, but perhaps some motivating example, to introduce the reader to the topic of inverse sensitivity, would be beneficial.
I also think the utility result, Lemma 4.1, could be better discussed. There is some further discussion about the result in the appendix, but I would suggest moving some of that to the main paper. It would be very important to at least discuss, what is the key take-away of the utility guarantee. Otherwise it is somewhat hard to appreciate it.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you clarify what is the motivation for subsampling 1000 samples from each data set in your variance experiments? Is the full data variance task too easy to show any difference between the methods? Or does the O(n) runtime become a bottleneck for running the experiments on the full data?
- typos
* "... in it’s full generality ..."
* in the end of page 6.: "even if $U_1^k(x) = \infty$", I guess it should be $U_f^1(x)$?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I believe the limitations are properly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review of our work and incredibly helpful feedback. In retrospect, we are disappointed with our decision to relegate much of the intuition to the appendix in deference to including more of the concrete results and will be sure to better incorporate aspects of this discussion into the main body.
$\textbf{Questions:}$
The decision to subsample 1000 samples followed entirely from previous DP papers we had read that used this method for their empirical studies. Their justification was that this bootstrapping better simulates new draws from the underlying distribution of the original data. As a result, it better ensures that the proposed method does not happen to only perform well on that specific dataset, but performs well generally on datasets drawn from the underlying distribution. The specific amount of 1000 was likely just a result of being a "clean" number (power of 10) and less than the total data size for most datasets. There are no computational difficulties with using larger datasets (within reason).
Thanks for catching the typos! The reviewer is absolutely right in their correction of the second typo (our apologies for the confusion).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I'm happy to keep my score as is. | Summary: This paper proposes a new instance specific asymmetric sensitivity under differential privacy by building on the well-studied inverse sensitivity mechanism which adapts to the hardness of the local data according to the inverse closeness to the underlying ground dataset. It also develops some theoretical guarantee and performs empirical validation.
Strengths: The paper deals with an important question in differential privacy.
Weaknesses: (1) The paper is not well-written and is very hard to follow. The abstract and introduction mentions the notion of asymmetric sensitivity without any intuitive explanation. Due to the lack of this important intuition, it is very difficult to understand well the contributions of this paper although the authors list their works in Section 1.3.
(2) The important definition of the asymmetric sensitivity mechanism (Line 187) is very unclear. The benefit of asymmetry compared with the (symmetric) inverse sensitivity mechanisms of Asi and Duchi (2020) is not explained.
(3)The paper fails to provide the motivations and explanations for theoretical results in a clear and convincing way. The authors consider many concepts and notions but don’t provide their clear logical connection to the main topic of the paper.
(4)The citations usually appear in the main text not in the footnote (like in Page 7).
Technical Quality: 2
Clarity: 2
Questions for Authors: (1) Line 208: Why is the computation time $O(\log n)$?
(2) In Definition 3.1, why 1/2? Can we replace it with 0 or 1?
(3) Could you please rewrite the definition of asymmetric sensitivity mechanisms in the form of the sparse vector technique or in the form of DP mechanisms? What is the benefit of asymmetry? Could you provide a concrete example where the sensitivity is asymmetric and explain how your techniques can be applied to it?
(5) Use the above example to illustrate the variance-bias tradeoff.
(6) What is the advantage of the asymmetric sensitivity mechanisms in this paper over the inverse sensitivity mechanisms in the literature?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We know that the reviewing burden can be significant and are very sorry to have added to this burden by not presenting our results more clearly. It's clear that we attempted to fit in too many of our results in the main body and neglected better explaining the intuition of our methodology. In particular, we relegated our main intuition section to the appendix (Section C) and are very disappointed in ourselves for this decision in retrospect.
$\textbf{Questions:}$
We greatly apologize for not better clarifying these questions the reviewer raises. We attempt to do so $\textit{concisely}$ here, but are more than happy to give further detail.
$\textit{Question 1 response.}$ The array $L^n_f(x),...,L^1_f(x), f(x), U^1_f(x),...,U^n_f(x)$ is ordered due to construction of Definition 3.4, and so inserting $t$ into this ordered array takes $O(\log n)$ time. Where $t$ lands in this array then immediately implies the value of $len(x;t)$ by Lemma 3.5. For example, if $L^i_f(x) < t < L^{i-1}_f(x)$ then $len(x;t) = i$.
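A minimal sketch of this lookup, assuming the ordered array has been precomputed (`path_length` is our illustrative name, not notation from the paper, and we assume $t$ falls strictly between grid points):

```python
import bisect

def path_length(ordered, t):
    """len(x; t) lookup sketch. `ordered` is the sorted array
    [L^n, ..., L^1, f(x), U^1, ..., U^n] of length 2n + 1, so f(x)
    sits at index n. E.g. if L^i < t < L^{i-1}, the result is i."""
    n = len(ordered) // 2
    idx = bisect.bisect_left(ordered, t)   # O(log n) insertion point
    return (n - idx + 1) if idx <= n else (idx - n)
```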
$\textit{Question 2 response.}$ This is best understood through Figure 5 where the reflective inverse sensitivity is a step-function and each step must be at most distance 1 apart to give the privacy guarantees (Lemma A.5). Subtracting 1/2 ensures this desired property (we're happy to give further detail).
$\textit{Question 3 (a) response.}$ We apologize for not clarifying, but sparse vector technique essentially just iteratively calls AboveThreshold (see section 3.6 of The Algorithmic Foundations of Differential Privacy). Given that we only need one call, we directly called AboveThreshold (which is in our preliminaries), but our mechanism is then essentially already written in the form of a simple call to sparse vector technique. Intuitively, if we consider Figure 5 again, our mechanism considers outputs in increasing order and tries to identify when the reflective inverse sensitivity crosses from negative to positive, ie exceeds the threshold of 0 which occurs at f(x), by applying sparse vector technique.
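For reference, a minimal sketch of the standard AboveThreshold subroutine (following Section 3.6 of The Algorithmic Foundations of Differential Privacy; in our mechanism the queries fed to it would be the reflective inverse sensitivities of the candidate outputs with threshold 0, which we elide here):

```python
import numpy as np

def above_threshold(queries, threshold, epsilon, rng):
    """Standard AboveThreshold: return the index of the first query whose
    noisy value clears the noisy threshold. Each query is assumed to
    have sensitivity 1."""
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, q in enumerate(queries):
        if q + rng.laplace(scale=4.0 / epsilon) >= noisy_threshold:
            return i
    return None  # halted without any query crossing the threshold
```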
$\textit{Question 3 (b) response.}$ We view the benefit in terms of the relative advantage of our method (answered in question 6)
$\textit{Question 3 (c) response.}$ To be clear, our technique can be generally applied and is just more effective than other methods when the sensitivities are asymmetric. An example of asymmetric sensitivities is for variance. If we change one individual's data point the amount the variance decreases is bounded, but the amount it increases can be infinite (section D.1 has full details).
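This asymmetry is easy to see numerically; a toy illustration (our own example, not the paper's code):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
base_var = data.var()

# Decreasing the variance is bounded: the best a single change can do
# is move the most extreme point to the mean.
lowered = data.copy()
lowered[4] = data.mean()

# Increasing the variance is unbounded: one individual's value can be
# made arbitrarily large.
raised = data.copy()
raised[0] = 1e6
```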
$\textit{Question 6 response.}$ The benefit of our method comes when the sensitivities are asymmetric (changing an individual's data increases the function more than decreasing it). To best understand this we plot the PDF of both for perfectly symmetric sensitivities in Figure 6. As you can see, our method biases the output to be slightly below the true value. We then plot the PDF with asymmetric sensitivities in Figure 7 and see that our method has more probability mass around the true value in this setting. This intuition is discussed further in Section C and validated empirically with our instantiations in section 5 and 6, and theoretically shown in section 4.
$\textit{Question 5 response.}$ Our method is biased towards selecting outputs below the true value (discussed in question 6) and so the sensitivity of decreasing the function has more impact than the sensitivity of increasing the function on the variance of our estimate. Connecting to the example in 3(c), for the unbiased estimators, smooth sensitivity and inverse sensitivity, the noise added is infinite (so infinite variance) because the local sensitivity is infinite. In contrast, adding some bias allows us to have variance that is proportional to the sensitivity of decreasing the function (which is bounded), and we still achieve high utility (for example, see Figure 3).
---
Rebuttal Comment 1.1:
Title: I will rasie my evaluation
Comment: Thank you for your rebuttal. I am satisfied with the answers. I will raise my evaluation to 6. | Summary: The paper introduces a new notion of instance-dependent sensitivity, asymmetric sensitivity, to release general function queries. Compared with inverse sensitivity, it can better capture the underlying instance’s asymmetry, i.e. when changing a data point causing the function value to increase or decrease at different magnitudes. The authors also design an implementation framework for asymmetric sensitivity based private query release using the sparse vector mechanism. Proofs of privacy and utility guarantees as well as empirical evaluation are provided to support the advantage of this algorithm.
Strengths: * The proposed asymmetric sensitivity and the query-releasing framework built upon it are novel and will benefit future research.
* The paper is well-written, providing clear intuition of the methods, theoretical guarantees, and empirical evaluation.
Weaknesses: As the author noted in the paper (lines 178-179), the proposed method may not yield significant improvements when applied to vector-valued queries due to the structure of high-dimensional space.
Typo: $L^{\log k}$ instead of $L_f^{\log k}$ in Lemma 4.1 and its proof.
Technical Quality: 3
Clarity: 3
Questions for Authors: From empirical evaluation (Figures 1, 2, and 3), the improvement of the asymmetric sensitivity mechanism upon the inverse sensitivity mechanism seems to be higher in the high privacy regime than in the low privacy regime. Can the authors comment on this?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provide this part in the checklist
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review of our work, helpful feedback, and catching an important typo.
$\textbf{Question:}$
This is a phenomenal question and a great catch by the reviewer. To answer it really requires digging deep into the intuition upon why our method performs better with asymmetric sensitivities. A brief summary is that:
1. Greater asymmetry of the underlying instance correlates to better relative performance of our method (see Figure 8) and explicitly quantifying asymmetry not only depends on the underlying instance but also the privacy parameter (Section C.3 has full details)
2. For variance and model evaluations, we generally have that the asymmetry of the underlying instance increases for higher privacy parameters.
$\textbf{Longer explanation of 1. (Section C.3 has full details):}$
Recall in line 234 that we informally consider the sensitivities to be asymmetric if $|f(x) - U_f^k(x)| >> |f(x) - L_f^k(x)| $ for most $k$ (or vice versa), which is to say that changing an individual's data can generally increase the function more than decrease it. In order to explicitly quantify this we need to average this magnitude of difference over all $k$. However, this averaging should not be uniform because the mechanisms are more likely to select outputs closer to $f(x)$, i.e. within $[L_f^k(x), L_f^{k-1}(x)]$ or $[U_f^{k-1}(x), U_f^k(x)]$ for smaller $k$ values (this is a slight oversimplification). Essentially, the more likely the mechanism is to select outputs closer to $f(x)$, the more important the magnitude of difference is for small $k$. The privacy parameter directly affects this likelihood, so for higher privacy parameters the magnitude of difference for larger $k$ is more impactful in the quantification of asymmetry. We empirically confirm this intuition in our experiments for Figure 8.
$\textbf{Longer explanation of 2:}$
The summary here is that for variance and model evaluations we generally have that the magnitude of difference between $|f(x) - U_f^k(x)|$ and $|f(x) - L_f^k(x)| $ becomes larger as $k$ increases. As such, for higher privacy parameters the asymmetry of sensitivities increases and our method sees increased relative advantage.
To understand this we'll restrict to the variance instantiation for simplicity and assume the data is Gaussian. Changing one individual's data to minimize variance takes the min or max data point and moves it to the mean, and similarly to maximize the variance takes the data point closest to the mean and moves it to the boundary (note this is a slight oversimplification). This generalizes to changing $k$ individual's data. Due to the bell curve shape of the Gaussian, there are far fewer outliers compared to points close to the mean. As such, the amount we can decrease the variance lessens much more quickly with respect to $k$. This then results in the magnitude of difference between $|f(x) - U_f^k(x)|$ and $|f(x) - L_f^k(x)| $ becoming larger as $k$ increases.
While the real data we use is not necessarily Gaussian, we still generally expect it to be somewhat concentrated as opposed to uniformly spread and the same notion holds. This same idea applies to model evaluation where we expect more errors to be closer to zero as opposed to very large.
---
Rebuttal Comment 1.1:
Title: Official comment by reviewer J6Z2
Comment: Thank you for answering my question! I will keep my score. | Summary: The paper proposes an asymmetric sensitivity mechanism for the private estimation of functions with asymmetric outputs, such as variance. This mechanism combines the inverse sensitivity mechanism with the sparse vector technique to handle asymmetric sensitivities effectively. The proposed method is efficient and demonstrates superior performance in variance estimation and model evaluation tasks (e.g., for regression and classification models) compared to existing methods like inverse sensitivity mechanisms and smooth sensitivity mechanisms.
Strengths: - This paper utilises the asymmetric sensitivity property (e.g. "changing an individual's data can generally increase the function more than decrease it"). This property holds for many problems.
- The proposed method is of practical relevance and has efficient implementations for many applications.
Weaknesses: The theoretical guarantee can be vacuous in some reasonable settings. For example, the upper bound in lemma 4.1 can be loose when $f(x)\ll 1$ for all $x$.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is there a general recipe for efficiently implementing the asymmetric sensitivity mechanism beyond the specific examples provided (i.e., variance estimation and model evaluations)?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review of our work and their helpful feedback.
$\textbf{Weaknesses:}$
Could the reviewer please explain further why they would classify our bounds as "vacuous" in reasonable settings? We certainly agree with the reviewer that asymptotic bounds can sometimes be loose in practice, but we feel this applies to the previous work as well. For the scenario the reviewer brings up, $f(x) \ll 1$, our upper bound essentially becomes absolute error (of approximately $\beta^k - 1$) instead of relative error, but this is still non-trivial. Additionally, it is still true in this setting that our bounds are superior to previous work as $U_f^1(x) \rightarrow \infty$.
$\textbf{Question:}$
As was originally pointed out in the smooth sensitivity paper, unfortunately there are functions where computing even just local sensitivity is computationally infeasible, which is required for exact smooth sensitivity, inverse sensitivity, as well as our asymmetric sensitivity. Computing efficient and meaningful approximations of local sensitivity is then still highly dependent upon the function of interest for all of these methods. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper provides a new algorithmic framework for differentially privately computing general functions that adapts to the "local" sensitivity of the underlying dataset. It follows previous work's paradigm which is to sample outcomes with probabilities exponentially decreasing in how "far away" the current dataset is from one that produces that particular outcome. The mechanism works by finding the reflective inverse sensitivity which is slightly different from the conventionally used inverse sensitivity; then it tests a series of monotone outcomes and outputs the first one whose reflective inverse sensitivity passes the threshold of 0. The result is a potentially biased mechanism hence the known performance guarantees for related mechanisms such as inverse sensitivity mechanism do not directly apply. Since the evaluation of the reflective inverse sensitivity can be inefficient, the paper then discusses practical tactics for approximating it. The mechanism is carried out on the private variance problem and the experiments show promising results where the new mechanism beats previous ones with a significant lead.
Strengths: The mechanism used in the paper is original in that the defined variant of inverse sensitivity is asymmetric, and it does improve the performance in the case study of private variance estimation. The paper is organized well, and the writing is rather clear. The mechanism could be a preferable alternative to the state-of-the-art; its practical use mostly depends on whether we can easily find approximate functions for the asymmetric inverse sensitivity that are efficient to compute, but that can be said of most exponential mechanisms.
Weaknesses: The greatest weakness of this paper is perhaps the lack of justification of why the proposed mechanism should work better, especially the asymmetric part. What it does is skip the t's smaller than f(x) and only look at bigger ones; hence we are moving probability mass from one side of the distribution to the other. The paper provides some intuition but I still fail to see why it should help. I am inclined to believe that in the private variance computation it helps because we know variances cannot be negative.
By and large I think it is pointing out a phenomenon noticeable in practice, and I don't doubt that it could be applicable to more cases than have already been observed. Still, there is a missing piece of the puzzle: why are we observing this, and how can we extend this knowledge to other related problems? The community could benefit greatly from that missing theoretical analysis, and it would complete the story.
Technical Quality: 2
Clarity: 3
Questions for Authors: How does the performance of this algorithm compare to, say, running propose-test-release by testing a fixed series of t's, comparing each $len_f(x;t)$ one by one to some threshold (maybe 1/2) as used here, and returning the first t that passes this test? It seems close to this mechanism in nature, since mostly it's just aiming at picking the smallest t that gets a positive $len_f(x;t)$; further, the result is also sort of like an exponential mechanism. Do the authors have any intuition on this? It is perhaps a symmetric version of this mechanism.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review of our work and their helpful feedback.
$\textbf{Weaknesses:}$
We greatly appreciate the reviewer's desire for intuition and strongly agree. Unfortunately we decided to put our main intuition section in the appendix (Section C) due to space considerations and are disappointed in our decision in retrospect. It is unreasonable to ask the reviewer to read this section, so we will briefly discuss here how it addresses the reviewer's concerns.
First, the bias in the PDF from our method is actually towards selecting outputs below f(x) and not above, as the reviewer seems to suggest (see Figure 6). As a result, when the sensitivities are asymmetric, the bias in the PDF is beneficial for our method (see Figure 7). We explicitly define this asymmetry (Definition C.1) and then empirically test this more generally (Figure 8) to show that higher asymmetry of sensitivity clearly correlates with better relative performance of our method; this is theoretically supported by our Lemma 4.1.
Connecting this to one of our use cases, variance, the reviewer is absolutely correct that non-negativity contributes to improved performance, but the root cause is that changing one individual's data will most often increase the variance more than decrease it (Section D.1 has full details). In particular, if the data is unbounded then one individual can increase the variance arbitrarily by changing their data. This is also true of our other use cases (Section E has full details), and in general we expect this to be the case for functions with inherent lower bounds (like non-negativity, as the reviewer suggests) or inherent upper bounds.
We're more than happy to explain further and provide additional pointers if it would help.
$\textbf{Question:}$
This is a great question and is definitely a close variant of our approach. But it does seem that the process the reviewer is suggesting (continual querying until a threshold is exceeded) matches more with SVT / AboveThreshold as opposed to using propose-test-release which only tests one value and releases if the test passes.
This type of method using SVT would work and we agree with the reviewer that it's kind of a symmetric version of our method. But our intuition is then that this would only potentially be more effective with symmetric sensitivities and would also still be worse than inverse sensitivity in that setting. | null | null | null | null | null | null |
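For readers unfamiliar with the primitive referenced above, a minimal generic sketch of SVT / AboveThreshold (the standard textbook version for a stream of sensitivity-1 queries, not the authors' asymmetric variant; the noise scales shown are the common choices) looks like this:

```python
import random

def above_threshold(queries, threshold, eps, rng=random):
    """Return the index of the first query whose noisy value exceeds the
    noisy threshold (eps-DP for sensitivity-1 queries), or None."""
    def laplace(b):
        # Laplace(b) sampled as an exponential with a random sign
        return rng.expovariate(1.0 / b) * rng.choice((-1.0, 1.0))

    noisy_threshold = threshold + laplace(2.0 / eps)
    for i, q in enumerate(queries):
        if q + laplace(4.0 / eps) >= noisy_threshold:
            return i
    return None
```

In the symmetric variant the reviewer describes, the stream of queries would be the values $len_f(x;t)$ for a series of candidate $t$'s ordered by distance from $f(x)$; the asymmetric method instead biases which side of $f(x)$ the candidate stream examines.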
Learnability Matters: Active Learning for Video Captioning | Accept (poster) | Summary: This work proposes an active learning algorithm via data learnability and collective outliers. The algorithm takes three complementary aspects, learnability, diversity, and uncertainty, into account. The proposed algorithm leverages off-the-shelf models (e.g., LVLM) for approximations to ground truths.
Strengths: The proposed acquisition function is comprehensive, capturing learnability, diversity, and uncertainty. The first component is captured via 1) the prediction consistency of the scene graph, including objects and attributes, and 2) the absolute difference between the SPICE values of the predictions from the image-based LVLM and the video captioning model. The second component is captured via tf-idf based on the predictions from BLIP2. The third component is captured by the shared objects, attributes, and relationships among the predictions from BLIP2 and the video captioning model.
The overall design of the acquisition function is reasonable.
A comprehensive review of data learnability and collective outliers is provided.
This paper conducts extensive experiments on MSVD and MSRVTT datasets. It compares widely used baselines, including maximum entropy, maximum likelihood, core-set selection, and clustered divergence.
The experiment results are promising.
Weaknesses: The usage of LVLM is debatable. If the LVLM is sufficiently powerful to generate accurate captions on videos, training a captioning model via active learning seems unnecessary. If the LVLM cannot generate accurate captions, using the output from LVLM for different components of the acquisition function is unreliable.
The proposed framework requires extensive usage of LVLM and the scene graph, which incurs significant computational costs. On the other hand, many tools, such as Mechanical Turk, are available to collect human annotation cheaply.
I would like to know if the reduction of human annotation costs justifies the extensive computational cost.
The technical innovation for each component appears incremental, and the terms in the acquisition function are heuristic, without mathematical proof.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We will provide more details on the motivation behind our design and include additional supportive observations in the final version. Please review our feedback below. We are happy to address any further questions during the discussion period.
> 1. The usage of LVLM is debatable.
+ Currently, LVLMs that support video are still in their infancy. Their zero-shot predictions on complex videos can be unreliable, e.g., the performance on VATEX in Tab. 10 of mPLUG [1]. Similarly, directly applying BLIP [2] gives unsatisfactory performance on VATEX, for instance a CIDEr score of 37.4 according to [1]. Therefore, instead of directly exploiting the LVLM's predictions, we exploit the LVLM to 1. generate human-like captions at the same granularity so that the abstraction inconsistency can be measured, and 2. approach granularity inconsistency by comparing predictions from the LVLM and $f$ (Lines 176-187). Overall, while the LVLM itself may not be entirely reliable, it can provide valuable information through inter- and intra-comparisons.
> 2. If the reduction of human annotation costs justifies the extensive computational cost.
+ Our additional computational costs arise from the active learning algorithm, primarily due to applying BLIP2 to the unlabelled dataset and generating scene graphs from its predictions. This process occurs only once on the unlabelled set. On our hardware, consisting of 4 RTX 4090 graphics cards with a power capacity of 2000 kWh, it takes no more than 9 minutes to run BLIP2 and 5 minutes for scene graph generation on the MSVD training dataset. In contrast, according to [3], annotating the full dataset in 2010 required hundreds of annotators, around 2 months, and less than 5000 USD in total. Even if the highest annotation efficiency claimed by the authors (10K captions per day) is always achieved, it would still take about 5 days to finish MSVD, without considering the cost of quality control. Therefore, we argue that active learning is more efficient in terms of both time and cost.
> 3. The technical innovation for each component appears incremental, and the terms in the acquisition function are heuristic, without mathematical proof.
+ Our approach incorporates learnability, uncertainty, diversity, and a caption-wise active learning protocol. To the best of our knowledge, we are the first to explore collective outliers in video captioning tasks while also formulating them as learnability in an active learning framework. Although uncertainty and diversity are common strategies in active learning, we have adapted them specifically for our video captioning setting. For example, uncertainty measures the consistency between the scene graph generated from the most confident caption by $f$ and that from LVLM (Line 229-233).
References:
[1] Li, Chenliang, et al. "mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections." Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022.
[2] Li, Junnan, et al. "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation." International conference on machine learning. PMLR, 2022.
[3] Chen, David, and William B. Dolan. "Collecting highly parallel data for paraphrase evaluation." Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies. 2011.
Once again, your time and effort is more than appreciated.
---
Rebuttal 2:
Comment: I thank the authors for preparing the rebuttal. After reading the rebuttal, I intend to keep my original rating, which is positive.
---
Rebuttal Comment 2.1:
Comment: We sincerely thank you for your time and effort as a reviewer for NeurIPS 2024. We appreciate your engagement and the valuable feedback that has helped us improve our paper. We are grateful for your recognition and support of our work. If you have any additional concerns, please feel free to let us know.
Best Regards,
The Authors | Summary: The paper presents a groundbreaking exploration of collective outliers in video captioning tasks, introducing innovative active learning algorithms and an effective caption-wise learning protocol that integrates human knowledge to enhance model performance. Despite its strengths in pioneering research and sophisticated methodologies, the paper falls short in conducting cross-dataset experiments beyond MSVD and MSR-VTT, limiting its real-world applicability. Additionally, a lack of thorough limitation analysis and minor writing issues, such as visibility concerns in figures, suggest areas for improvement in future research efforts.
Strengths: - Pioneering Exploration of Collective Outliers
The paper stands out for being the first to delve into the realm of collective outliers within video captioning tasks, offering a unique perspective on this aspect that has not been extensively explored before.
- Innovative Active Learning Algorithm
The paper introduces a novel active learning algorithm that considers learnability, diversity, and uncertainty, addressing the challenges posed by collective outliers. This algorithm is designed to effectively handle inconsistencies in abstraction and granularity, showcasing a sophisticated approach to improving model performance.
- Effective Caption-Wise Active Learning Protocol
The paper presents a new caption-wise active learning protocol that efficiently integrates human knowledge, demonstrating a strategic way to leverage external input to enhance the learning process. The paper showcases state-of-the-art performances on video captioning datasets using diverse model backbones, indicating the effectiveness of the proposed approaches in significantly improving the quality of generated captions.
Weaknesses: - Lack of cross-dataset experiments. The experiments are only conducted inter MSVD and MSR-VTT. This does not solve the challenges of video captioning in real scenarios. Given a small annotated dataset (like MSR-VTT) and a large unannotated dataset from the web (like WebVid or VATEX), what samples should we annotate from a large web dataset to improve performance on the small annotated dataset?
- Lack of limitation analysis.
- Some writing issues. (1) The fonts in Figure 5 can be hardly seen. Better keep font size in all figures the same.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging comments. In addition to including the cross-dataset experiments and limitations in our final version, we will double-check the text and redraw some figures to enhance clarity and aesthetics. Please see our responses to your concerns below. More figures are provided in the PDF file. We are happy to address any further questions during the discussion period.
> 1. Lack of cross-dataset experiments.
+ Due to time constraints, we were unable to complete the experiment on large-scale datasets such as VATEX or WebVid. To simulate a cross-dataset setup, we used the small annotated dataset MSVD (1,200 videos with 40 captions per video) and treated MSR-VTT as the large unannotated dataset, which includes 6,513 videos paired with 20 captions each. The results using the official SwinBERT implementation are shown below.
| Method | Data Per. | BLEU_4 | METEOR| ROUGE_L| CIDEr | SPICE |
| :------ | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: |
| Starting Point| 0\% | 55.7119 | 39.6953 | 75.7301 | 109.3901 | 6.9690 |
| Random| 20\% | 62.1464 | 42.5841 | 78.6554 | 123.2620 | 7.6284 |
| Ours | 20\% | 63.6971 | 43.6921 | 79.8555 | 127.4095 | 7.7973 |
| Ours | 5\% | 63.6828 | 43.3397 | 79.7258 | 126.3246 | 7.5737 |
| Ours | +5\% | 63.0961 | 43.6426 | 79.9844 | 130.9551 | 7.8935 |
| Ours | +5\% | 63.5148 | 43.8025 | 79.8097 | 129.6331 | 7.9311 |
| Ours | +5\% | 64.8791 | 44.2499 | 80.4313 | 129.0806 | 7.7748 |
+ We report the overall performance on MSVD test set. "Data Per." is the percentage of human annotations on MSR-VTT. "Random" refers to the random selection baseline described in Lines 268-269 of the main paper. We also report the performance of directly selecting 20% of MSR-VTT (Row 4) and iteratively adding 5% of MSR-VTT four times (Rows 5-8). As shown in the table, our AL algorithm significantly improves the overall performance of MSVD and is a more effective choice compared to random selection. Furthermore, directly selecting 20% of data is slightly less effective than iterative selection, demonstrating the benefits of curriculum learning. Notably, the overall performance peaks at two iterations, or 10% of human annotations on MSR-VTT, according to CIDEr and SPICE. Beyond this point, the performance saturates and slightly declines. This is expected, as 20% of MSR-VTT includes 26K captions and at least 1.3K videos, which is comparable to the original training set of MSVD. Adding more data from a different dataset can degrade performance, as the training may deviate from the original dataset.
> 2. Lack of limitation analysis.
+ There are several limitations that we believe are worth further exploration. Firstly, our paper only briefly touches on the relationship between curriculum learning and learnability. Beyond provoking the design of the learnability term, we believe curriculum learning can enhance the interpretability of learnability terms and even active learning. Secondly, we found that current evaluation metrics, such as CIDEr, do not always align with human evaluations. More human analysis is needed for video captioning tasks. Thirdly, we made some preliminary attempts to combine knowledge from LVLMs in a semi-supervised learning manner (Line 389-394). Although we did not see a significant improvement, we believe further efforts are warranted. Lastly, our experiments with ChatGPT-4 can be improved with more refined designs. We will include these limitations in our final version.
> 3. Some writing issues.
+ We appreciate the reviewer pointing out the writing issues in our manuscript. For Figure 5, we will enlarge the font size and ensure consistent font size across all figures. Additionally, we will double-check the text and redraw some figures to enhance clarity and aesthetics.
We are grateful for your constructive feedback and will revise our manuscript accordingly.
---
Rebuttal Comment 1.1:
Comment: Thank you for your invaluable efforts and constructive feedback on our manuscript.
As the discussion period nears its conclusion, we eagerly await your thoughts on our responses. We sincerely hope that our response meets your expectations. Should there be any remaining concerns or need for further clarification, we are fully prepared to address them as soon as possible.
Best regards,
The Authors
---
Rebuttal Comment 1.2:
Title: Thanks for your response.
Comment: My concerns are well-addressed.
---
Reply to Comment 1.2.1:
Comment: We are deeply indebted to you for the time and energy you devoted as a reviewer for NeurIPS 2024. Your insightful feedback has been instrumental in enhancing our paper, and we are truly thankful for your acknowledgment of our efforts. If you have any further concerns, please don't hesitate to reach out.
Best regards,
The Authors | Summary: This paper works on active learning for video captioning, i.e., filtering the training data for a video captioning dataset. The authors observed significant inconsistency in the captioning annotation, due to different captioning abstraction or granularity. These inconsistency makes the model training hard. The authors then propose an active learning framework that find captions that the model can learn, combining with diversity and uncertainty metrics. The overall model trained with 25% of the annotated captions outperforms models trained with all captions on MSVD and MSRVTT.
Strengths: - The analysis of the video captioning annotation inconsistency in Figure 1 is very interesting. This paper first explored this problem, and utilize it to design an active learning framework. This workflow makes a lot of sense to me.
- Experiments support the claims. The authors compared multiple active learning baselines, and show clear improvements with the proposed model.
- The implementation details are sufficient.
- The paper is well motivated and well structured. Figure 1 and Figure 2 make the method clear.
Weaknesses: - The authors only report relative performance with respect to the fully supervised baseline in the main paper, which "hides" the absolute metrics of the proposed model. Only in supplementary Table 2 do the authors report the absolute metrics on MSRVTT, which seem to be significantly below SOTA on Papers with Code. E.g., the CIDEr of the best-performing models is > 70, while the proposed model is at ~55. While I am not saying the paper needs to get SOTA performance, this gap towards SOTA is concerning. I.e., it might be that a more powerful model has fewer learnability issues, and the proposed method may gain less improvement. This also makes the claim in the abstract "our algorithm outperforms SOTA methods by a large margin" an overclaim.
- From Section 3.3, the authors select X% of the total captions, rather than X% of the overall videos. Arguably, annotating two captions for the same video can be more efficient than annotating two different videos, due to the shared workload of watching the video. This makes the claim of "25% of human annotations" questionable.
Technical Quality: 3
Clarity: 3
Questions for Authors: Overall this paper studied a very interesting observation in video captioning (inconsistency in human annotation), and proposes a valid framework to utilize this observation, with reasonable gains over selected baselines. My main concern is the low baseline performance, resulting in overclaims on performance. My current rating is a borderline accept, conditioned on the authors' response to the baseline issue and willingness to rephrase the overclaims.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. We are pleased to clarify our claims and will update our manuscript with additional supportive results. Below are our responses to your concerns. We are committed to addressing any further questions.
> 1. Low baseline performance, resulting in overclaims on performance.
+ We would like to clarify that the "SOTA methods" mentioned in "our algorithm outperforms SOTA methods by a large margin" refers specifically to SOTA active learning methods. We will enhance this clarification in our final version. Regarding the baseline methods, SwinBERT and CoCap were selected as they offer a good balance between accuracy and efficiency (Line 288). In contrast, other SOTA video captioning methods, such as COSA[1], require extensive pre-training. In addition, fine-tuning is also required when adapting to downstream tasks to achieve SOTA video captioning performances. To validate this, we downloaded the pre-trained mPLUG-2[2] model from its official site and applied it to the MSR-VTT test set. Without fine-tuning on the MSR-VTT training set, it achieved a CIDEr score of 24.2. In contrast, mPLUG-2 ranks first on the MSR-VTT leaderboard with a CIDEr score of 80.3 after fine-tuning, demonstrating the significant performance improvement that comes from including data from the target task. Our active learning method remains highly valuable as it focuses on selecting the most informative cases for annotation rather than requiring annotation of all data, thereby leading to a more effective fine-tuning process.
> 2. The claim of "25\% of human annotations" is questionable as "annotating two captioning for the same video can be more efficient"
+ When reconstructing a video captioning dataset, it is common for an annotator to provide just one caption per video. Although this approach is less efficient than having multiple annotations per person per video, it ensures data quality and diversity. For instance, VATEX [3] states that "Each video is paired with 10 English and 10 Chinese diverse captions from 20 individual human annotators." (Introduction). Further details in Section 3.1.1 highlight that "Towards large-scale and diverse human-annotated video descriptions, we build upon Amazon Mechanical Turk (AMT) and collect 10 English captions for every video clip in VATEX, where each caption from an individual worker." Similarly, Section 2 of MSR-VTT [4] notes, "Therefore, we rely on Amazon Mechanical Turk (AMT) workers (1,317) to annotate these clips. Each video clip is annotated by multiple workers after being watched. ... As a result, each clip is annotated with 20 sentences by different workers." Therefore, our claim of "25\% of human annotations" is both practical and meaningful.
References:
[1] Chen, Sihan, et al. "COSA: Concatenated Sample Pretrained Vision-Language Foundation Model." Proceedings of the Twelfth International Conference on Learning Representations, 2024.
[2] Xu, Haiyang, et al. "mPLUG-2: a modularized multi-modal foundation model across text, image and video." Proceedings of the 40th International Conference on Machine Learning. 2023.
[3] Wang, Xin, et al. "Vatex: A large-scale, high-quality multilingual dataset for video-and-language research." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
[4] Xu, Jun, et al. "Msr-vtt: A large video description dataset for bridging video and language." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
Once again, we sincerely thank the reviewer for your time and effort in reviewing our manuscript.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: Thanks the authors for the rebuttal. Please make sure to add "SOTA **active learning**" in the main claim. While the authors explained why the baseline numbers are lower than the actual SOTA, it is still unclear to me why these particular baselines (SwinBERT and CoCap), rather than a more well-pretrained baseline model, are selected for experiments. I also understand it is not practical to re-do large scale experiments during the rebuttal period.
The authors response on annotation costs make sense.
Overall, my concerns are partially resolved. I'll keep my current borderline rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for taking the time to review our rebuttal. Your valuable suggestions have greatly contributed to improving our manuscript. We appreciate your understanding of the limitations during the rebuttal period and welcome any further concerns or questions you may have.
Best Regards,
The Authors | null | null | Rebuttal 1:
Rebuttal: The results of our cross-dataset experiments are provided in the PDF file.
Pdf: /pdf/d9e28b23d83e1cc1c026b41628bb697be8848351.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Linear Regression using Heterogeneous Data Batches | Accept (spotlight) | Summary: This paper addresses the regression problem in scenarios where heterogeneous data is collected from multiple sources, necessitating learning tasks on small batches of samples. The approach involves dividing the data sources into k subgroups with unknown distributions. This study advances the work of Kong et al. (2020) by introducing a gradient-based algorithm, which does not require the restrictive assumptions present in previous works, such as the assumption of isotropic Gaussian distributions for all k subgroups. The properties of the proposed solution are examined through both theoretical and numerical analyses.
Strengths: The general quality of the paper is commendable, and it addresses all pertinent issues effectively. It adequately covers related works and provides a comprehensive perspective on the topic.
The main topic is an important research area with applications in a wide range of real-world projects. Existing works in this area often rely on several assumptions regarding distribution and batch sizes, which can limit their efficiency and accuracy. This paper offers a more general solution that does not depend on restrictive assumptions, thereby enhancing its applicability and robustness.
Weaknesses: The main concerns about the paper include:
It has not been properly discussed how this solution allows the recovery of models for sub-populations with a significant fraction of batches when there are a large number of subgroups. This aspect is mentioned several times but not explained clearly.
In particular, the paper lacks a detailed explanation of how this work can handle large subgroups more effectively than the reference work by Kong et al. (2020).
While it can improve upon previous related works, a detailed complexity analysis is needed. The proposed work is more complex than other works, such as Kong et al. (2020), in terms of both theory and computations. Eliminating the effect of restrictive assumptions can raise the complexity.
Additionally, the experiments are not sufficient for this work. Many aspects should be checked, and only a comparison with one baseline cannot support the claims made in the paper. Several other works in this area can be used for comparison, even if they have been proposed only for specific cases.
In Figures 1 and 2, the differences between the errors of the baselines when k is small are not significant. It would be beneficial for the authors to provide a theoretical explanation for this observation. Understanding why the error rates converge or show minimal variance under these conditions can provide deeper insights into the behavior of the algorithms and the impact of the number of subgroups on their performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitations have been partially addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for many positive comments about our paper. Below we address the remaining comments of the reviewer one by one.
1. Note that even when $k$ is arbitrarily large, the number of batches and batch sizes required in Theorem 2.1 and Corollary 2.2 are reasonable as in that case they depend on $\alpha$ instead of $k$, implying that batches can be recovered even for arbitrary large $k$. We also provide numerical experiments to further confirm this fact (see Figure 4 in the appendix.) Our main observation that helps in such a recovery for large $k$ is that even when $k=\infty$, a small subspace can be estimated that contains gradients for all components with a significant fraction present.
2. As discussed in line 112 of Section 1.2, our sample complexity is lower than that of Kong et al. (2020). Our time complexity $\tilde O(|B_s| d^2/\alpha_m)$ is at most $\tilde O(1/\alpha_m)$ times the complexity of Kong et al. (2020). As this value and our numerical experiments show, the complexity remains reasonable in practice.
3. Note that of all previous works, only Kong et al. (2020) takes advantage of the batch structure. All other works mentioned in the paper do not, and hence they naturally fare worse than the baseline we considered. For that reason we did not include them in our results.
4. We note that in Figures 1 and 2, $k$ is kept constant at 16. The small gap between the two algorithms for small values of the medium batch size arises because, if the medium batch size is too small, it falls below the requirement of Theorem 2.1; in that case both our algorithm and the baselines fail to recover the regression vector.
---
Rebuttal Comment 1.1:
Title: Official Comment
Comment: Thank you for addressing the feedback provided. After reviewing the revisions, I am pleased with the improvements and will be raising my score accordingly. | Summary: The paper considers the problem of linear regression with heterogeneous data batches, where a user receives a collection of batches of data with not necessarily the same underlying signal distribution and they must attempt to infer a list of heterogeneous signals present in this data. This paper proposes a novel gradient-based method to approach this problem that attains provable and favourable guarantees with less assumptions than previous work.
Strengths: Originality and significance: The paper considers a novel way to tackle the problem of linear regression with heterogeneous batches. The work is significant as their algorithm removes a number of assumptions in prior literature, and seems more performant than previous algorithms. The theory provided is solid and sound. The idea is good, and the description of the proof method and the grad clipping idea was good.
Quality and clarity: The problem is well motivated, and the contextualization with respect to prior works is made clear.
Weaknesses: Clarity:
- There are multiple spelling mistakes: a) “liner” above section header 1.1, b) “which follows [a] linear regression model” in the middle of page 2, c) “liner” on the last paragraph of page 3, and many others. Please re-read the paper carefully for spelling mistakes.
- Theorem 2.3: The first sentence does not make sense to me. What does it mean?
More comments:
- Missing reference to recent novel developments in MLR: https://proceedings.mlr.press/v206/tan23a.html. The sample complexity in this work is not super-polynomial, but is indeed linear, and this should be mentioned, despite it still having to satisfy assumptions 2 and 4.
- I think one drawback of the work has not been mentioned when compared to MLR: you output a list of size L, which is potentially much bigger than the number of signals in the original data, and this should be mentioned. This relates to the problem of list-decoding, and it would be good to also mention that literature and how this relates: https://arxiv.org/abs/1905.05679.
- I appreciate that multiple assumptions have been removed, but also importantly one property of the solution has changed from MLR: you are content now with a list of possible signals. From this, it is less surprising that assumptions can be removed, as now the assumptions required and the problem formulation is closer overall to list-decodable linear regression. Could you comment on this?
- The input distributions can be different, but they must be known? Whether yes or no, I would mention this clearly in the contribution.
- I think the work could benefit from a better discussion of what happens to the algorithm when two regression vectors have no separation. How does the algorithm deal with that? Do we expect the size of the outputted list to change?
- A real data experiment would be nice, to evidence this work as that of potential practical significance, but this is not necessary and I still find the work strong and a significant contribution without it.
Technical Quality: 4
Clarity: 2
Questions for Authors: What are the barriers to implementing/trying this on real data?
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The authors have addressed most limitations in the work, I acknowledge the last paragraph of the discussions. Further addressing the comments above would help as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful review and positive comments. The reviewer’s comments are addressed herein.
1. We thank the reviewer for pointing out the typos, and will correct them along with some others we found.
2. The condition part of Theorem 2.3 can be more clearly stated as follows:
Let index $i$ be in set $I$ and let $L$ be a list of $d$-dimensional vectors. Suppose that for a given $\beta>0$ at least one of the vectors in $L$ is within distance $\beta$ of the regression vector of distribution $\mathcal D_i$, namely $\min_{w\in L} ||w-w_i||\le \beta$.
3. We thank the reviewer for the reference to the recent work on mixed linear regression. It is clearly relevant, and we will include it in our paper.
4. (see next point)
5. Addressing 4 and 5 together, note that, as stated in Corollary 2.2, the size of the list $L$ identified by our algorithm is at most $\tilde O(1/\alpha_m)$, even for arbitrarily large $k$.
Furthermore, if all $k$ sub-populations satisfy the assumption in Theorem 2.1, the argument preceding Corollary 2.2 can be modified as follows to show that the list $L$ identified by our algorithm can be trimmed down to at most $k$ elements. In such a case, for any choice of $b^*$ from $B_m$ used to run Algorithm 1, the regression vector estimate $\hat w$ will satisfy $||\hat w-w_i||\le \epsilon \sigma$ for some $i\in [k]$.
A simple argument using the triangle inequality shows that removing estimates from $L$ until no two remaining estimates are within distance $2\epsilon\sigma$ of each other results in a list of size $\le k$ such that for every $i$ some remaining estimate $\hat w$ satisfies $||w_i-\hat w||\le 3\epsilon \sigma$.
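For concreteness, the trimming step described above can be sketched as a greedy deduplication pass (our own illustrative Python, not the paper's code; `estimates` and `threshold` correspond to the list $L$ and the distance $2\epsilon\sigma$):

```python
import numpy as np

def trim_list(estimates, threshold):
    """Greedily keep an estimate only if it is farther than `threshold`
    from every estimate kept so far, so no two kept estimates are close."""
    kept = []
    for w in estimates:
        if all(np.linalg.norm(w - u) > threshold for u in kept):
            kept.append(w)
    return kept
```

By the triangle inequality, every discarded estimate (and hence every true regression vector it approximated to within $\epsilon\sigma$) lies within `threshold` $+\ \epsilon\sigma = 3\epsilon\sigma$ of some kept estimate.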
Also, we note that we have already included the literature on list decoding: we referred to a recent work on list-decodable linear regression using batches [DJKS22] in Section 1.2, with further references in the appendix. We will discuss these connections in more detail per the reviewer's suggestion.
6. The input distributions need not be known. We will mention it clearly in the contributions following your suggestion.
7. If for two distributions the regression vectors have no separation then when the algorithm is run using a medium-sized batch as $b^*$ from either of two distributions we will be estimating the same regression vector. The upper bound on the size of the output list is independent of the separation.
8. As shown by the synthetic experiments in Figures 1-4, the algorithm is practical even for reasonable-size datasets and dimension $d$. As such there are no barriers to trying it out on real data. We leave it to future work to find interesting and relevant datasets for this setting.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Dear authors,
Thank you for addressing my questions, I am happy with the answers and have increased my rating accordingly. | Summary: This paper explores the concurrent learning of multiple linear models from various small batches of input-output pairs, each derived from a distinct, small-sized source dataset. Although there can be a large number of sources, it is assumed that each dataset belongs to one of several fixed but unknown subgroups, where each subgroup follows a specific input-output rule. These rules may not be linear for all subgroups; however, a significant number of subgroups, representing a substantial portion of the datasets, follow separate linear rules.
The paper provides theoretical guarantees that the linear models for most datasets can be recovered, even with minimal data in each dataset. Moreover, the algorithm's computational complexity is polynomial. The paper claims considerable improvements over existing works, with a detailed comparison with prior research in Section 1.2. The authors have detailed their algorithm, which (similar to some prior works) is based on "subspace estimation," in Section 3. Additionally, the paper includes a series of computational experiments, although I have not thoroughly examined them. I have not fully reviewed the proofs regarding the nature and efficiency of the proposed algorithm.
I would like to see other reviewers' comments. However, my current assessment is that this paper is a good candidate for NeurIPS. I vote for acceptance.
Strengths: The paper is well-written and well-motivated. The problem addressed is natural and aligns with numerous modern applications in machine learning and data science. The review of prior works is thorough, and the authors have effectively positioned their work within the existing literature.
The authors claim significant advancements in their bounds concerning the relationship to certain parameters and dimensions compared to previous works. For example, their bounds require at least $\Omega\left(k^{3/2}\right)$ batches of data, where each batch contains at least $\Omega\left(k^{1/2}/\epsilon^2\right)$ samples. As can be seen, the number of batches is independent of $\epsilon$ (the estimation error). In comparison, some of the most notable prior works require the number of batches to grow significantly with $\epsilon$.
The authors have mitigated several significant constraints present in prior works, including eliminating the assumption of a Gaussian distribution for feature vectors. Additionally, this work removes the assumption that all subgroups must follow a linear model. In fact, the authors extend the concept to accommodate scenarios where the number of subgroups can even approach infinity, ensuring the recovery of the "majority" of subgroups with linear models.
Weaknesses: There are too many hyper-parameters in Algorithm 1, which are provided to the algorithm as input. However, I found the discussion on how to tune these hyper-parameters in practice to be lacking. It would be beneficial to include more detailed guidelines or heuristics for selecting appropriate values for these hyper-parameters, potentially based on empirical studies or theoretical insights. Additionally, some of the assumptions in Section 2.1, such as the distributional properties of the input data and the specific conditions under which the theoretical guarantees hold, may need more clarification. Providing concrete examples or scenarios illustrating these assumptions would help in understanding their practical implications and ensuring they are not overly restrictive.
Technical Quality: 3
Clarity: 4
Questions for Authors: What are the main intuitions behind the assumptions made in Remark 2.1? (L. 196)
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for many positive comments. The reviewer’s main concerns involve the number of parameters and distributional assumptions, both addressed below.
While the number of parameters may seem large, each has a natural and important contribution.
Two accuracy parameters, $\epsilon$ and $\delta$, are part and parcel of any PAC learning algorithm: they specify the accuracy we are looking for and determine the number of samples needed to achieve it.
Two regression parameters, the length $M$ of the regression vector and the additive noise level $\sigma$, reflect the complexity of the input-output relation.
Two population parameters: the number $k$ of sub-populations and the fraction of batches $\alpha$ present from the relevant sub-population.
Note that the algorithm combines these parameters to set one parameter $\ell$.
Finally, two distribution parameters: the L4-L2 hypercontractivity parameter $C$ and the condition number $C_1$ of the input covariance matrix. These parameters reflect the complexity/richness of the distribution class and substantially generalize the input distributions considered in past work on mixed linear regression, which considered only sub-Gaussian distributions with identity covariance.
The L4-L2 hypercontractivity parameter significantly relaxes the sub-Gaussian distribution assumption and allows for heavier-tailed distributions.
The condition number $C_1$ of the input covariance matrices significantly generalizes the identity-covariance assumption on the input distribution, which corresponds to $C_1=1$.
Though the number of parameters seems large, the algorithm needs only an upper bound on each of their values. If the upper bounds are tight, they will result in a good estimate.
If for any parameter we do not have a tight upper bound, we can run the algorithm for different values of that parameter, dividing its possible range logarithmically, and choose the value that achieves a small error on a holdout set.
We will add this approach in more detail in the final version of the paper and formally define error computation and selection for our setting.
Furthermore, it is easy to see that these upper bounds are needed. For example, even for the simplest case when all samples are from the same sub-population, some condition on the anti-concentration of $XX^T$ as above is needed to obtain finite sample complexities [Lecué and Mendelson, 2016, Oliveira, 2016] in heavy-tailed linear regression, and this corresponds to a low condition number.
The reviewer also asks about the intuition behind Remark 2.1. Our algorithm and its analysis are based on gradient descent and can handle stochastic noise. Therefore we require only that the distributions of gradients be similar for batches in the same sub-population; since the analysis tolerates statistical noise, it can also tolerate small deviations in the gradient distributions across batches.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for addressing my questions. I have no further concerns and will maintain my score and support for this work. | Summary: The paper presents an algorithm that handles a situation of linear regression from many heterogeneous batches of data, and simultaneously learns all linear separators for which there are large enough batches of data witnessing them. The actual guarantees are more nuanced, covering the common case of a heavy-tailed batch size distribution with completely unknown and potentially difficult-to-distinguish batches.
Strengths: There's a lot to like about the overall approach to the problem. I would compliment it as being "applied theory," in contrast to a lot of prior work which takes a more purely theoretical assumption-laden mindset (as commented upon also by the authors, in describing the many assumptions that they are able to remove). Algorithmic techniques like clipping and median-finding are coupled with appropriate theoretical choices, to achieve a sample complexity that's nearly optimal for these regimes.
The applicability of the problem itself is more than meets the eye; theory can motivate usage of certain algorithmic techniques. In this case, working in the dual gradient domain to do clipping has a very helpful effect, as the authors prove -- and this is currently not standard for the problem.
Weaknesses: Sec. 1.2 and 1.1 cover overlapping material, and in addition they could be switched with each other. I found 1.2 much clearer about the top-level contributions (e.g. removal of assumptions), and 1.1 more of a detailed theoretical comparison. Overall, some rearrangement of the initial section 1 would improve presentation (e.g. the reference of "Our work" on line 34 is unclear to the reader at that point). I would suggest moving the "improvement over prior work" subsection forward even within Sec. 1.2. It takes 3.5 pages to formally define the setting with batches, prolonging the reader's ambiguity somewhat.
For a paper that uses some nice applied algorithmic techniques (see Strengths), it would be nice to run it in a less contrived manner; the R-way partitioning of the data seriously affects applicability (see Questions).
Technical Quality: 4
Clarity: 3
Questions for Authors: There remains a significant gap between the scenario analyzed here and a practical one. The number of GD steps R is a real bottleneck because it requires partitioning the data R ways. Though I understand the authors saying "this division may not be necessary for practical implementation," there are certainly extreme datasets in which it would and would not be necessary. It seems to me like this may be just an analysis consideration, and a more refined analysis might highlight under what stability-type conditions the partitioning would be needed. This is significant; a large R makes it possible to get some of the nice adaptivity properties e.g. to \Delta .
Duchi et al. have work on Ergodic GD which might be of interest for its techniques in this matter, and its parametrizations; doubtless there is more recent follow-up work in this line. Would appreciate comment from the authors on this issue of R.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review, the appreciation of our contribution, and the two suggestions that we discuss next.
The first suggestion concerns rearranging Sections 1.1 and 1.2. Section 1.1 discusses the paper’s main results, and section 1.2 addresses the results’ improvement over prior work. We agree with the reviewer that the comparison in Section 1.2 results in some duplication of results outlined in Section 1.1, and will rearrange some of the material to minimize the repetition.
The second comment concerns R-way partitioning of the data. More concretely, our algorithm runs in R rounds, and our theoretical analysis requires the use of independent samples in each round.
We thank the reviewer for mentioning that Duchi’s work might help eliminate the need for this R-way partitioning in our analysis either entirely or under appropriate constraints.
In searching for papers on Ergodic Gradient Descent by Duchi et al., we found the paper "Ergodic Mirror Descent," which we believe the reviewer referred to. While this paper performs gradient descent using ergodic samples, and hence generalizes the independent samples we use, it still does not eliminate the need for new samples in each round of gradient descent. Within the limited rebuttal period, we could not find a way to extend the paper's analysis to allow using the same samples in each round of our algorithm, and we leave that for future work.
Note, however, that to achieve estimation accuracy $\epsilon \sigma$ with noise variance $\le \sigma^2$ and regression vectors of length at most $M$, the number of rounds $R$ required by the algorithm is just $\log(M/\sigma)$. Hence the factor-$R$ increase in sample complexity implied by our theoretical analysis is at most a logarithmic factor.
Furthermore, in our numerical experiments the algorithm recovers the regression vectors even when the same samples are used in each round. However, we agree with the reviewer that without a theoretical proof it is unclear whether the partitioning might still be necessary for some extreme datasets.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing my questions, and points about the presentation. I will maintain my score and positive review. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes an algorithm to solve linear regression problems from batched data. The regression coefficients for each batch are assumed to be heterogeneous but come from $k$ possible values. The algorithm proposes a few novel approaches to identify which medium-sized batches are close to a given batch. It also improves the statistical error by estimating a low-rank subspace from small batches. The knowledge-integration techniques developed in the paper are intriguing, and the numerical performance is better than [KSS+20].
Strengths: The introduction of the algorithm is mostly clear, despite its complexity. The intuitions are well-explained. The gradient clipping technique is novel compared with [KSS+20]. Also, the use of gradient descent in the algorithm presents an interesting contribution.
Weaknesses: In line 123, the authors claim that the model does not rely on the previous assumptions. The authors could also briefly discuss how their algorithmic innovations circumvent these assumptions.
Some minor issues about writing:
The sentence in line 332 is very long and hard to follow.
In line 836, there is an error.
Technical Quality: 4
Clarity: 3
Questions for Authors: The number of small batches required scales with $d$, which is relatively large in high dimensions. Can authors give specific real-life examples to justify the assumptions on $|B_s|$ and $|B_m|$ in Theorem 2.1?
Authors claim that $k$ can be infinity, as long as the constraints of $\alpha$ are satisfied. Do authors test the claim numerically?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the careful review and constructive feedback, and for appreciating the contribution, novelty, presentation clarity, and intuition.
Regarding your comments and questions:
Line 123: Section 1.3 summarizes the algorithm’s innovations, and we will elaborate on how they help overcome assumptions in prior works.
Line 332: We will break it into several sentences so it is easier to follow.
Line 836: We will add the missing reference.
Next, we answer the two questions raised in the review.
For an example motivating the number of small batches, observe that in many recommendation systems, most users rate only a few items. The ratings by any such user may serve as a small batch. Since these systems have many users, the number of small batches will also be large. Observe also that typically relatively few users provide a fair number of ratings, and they can serve as medium-sized batches.
For a test justifying that $k$ can be large, Figure 4 in the appendix shows an experiment where the number $k$ of subpopulations is large (100), while the number of subpopulations with sufficiently many batches for enabling recovery is small (4). Our theoretical analysis shows that this experiment can be repeated with even larger $k$, without impacting the algorithm’s performance. | null | null | null | null | null | null |
Causal Effect Estimation with Mixed Latent Confounders and Post-treatment Variables | Reject | Summary: This paper investigates the problem of latent post-treatment bias in causal models where there exists some proxy variables of the latent confounder and post-treatment variables. The authors first derive a general form of latent post-treatment bias which is intractable in most situations (except in special cases such as linear SCM). The authors state that the latent post-treatment bias can be arbitrarily bad for existing proxy-based causal inference methods. They then propose an identifiable VAE-based causal inference algorithm under the assumption that at least one dimension of each sufficient statistic of the latent prior is invertible. The proposed method is evaluated on both synthetic and real-world datasets to demonstrate its causal effect estimation capability with the presence of both latent confounders and post-treatment variables.
Strengths: • Causal reasoning in the context of latent confounder and post-treatment variables is an important topic especially with observational data.
• The authors clearly state the necessary assumptions for the identifiability of true latent variables, and the logic of determining the dimensions of $\boldsymbol{C}$ and $\boldsymbol{M}$ is well presented.
• The paper has a well-established theoretical basis.
Weaknesses: • For the illustrative example in the introduction, it might be better to explicitly specify what the post-treatment variable is.
• Other existing works [1-3] on identifying latent confounder/mediators based on the iVAE architecture should also be included in the related work.
• The role of post-treatment variables $\boldsymbol{M}$ seems to be a bit ambiguous. To be specific, is Theorem 4.1 valid for all types of relationships between $\boldsymbol{M}$ and $Y$?
• The illustration of (iv) in Assumption 3 is a little confusing, as it assumes one extra degree of freedom on the prior parameters of $\boldsymbol{Z}$ and is critical to the identifiability of $\boldsymbol{Z}$ from $\boldsymbol{X}$. More explanation on this point will be appreciated.
• The empirical evaluation consists of only one real-world dataset, which somehow limits the applicability of the proposed method.
References:
[1]. Zhou, D., & Wei, X. X. (2020). Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE. Advances in Neural Information Processing Systems, 33, 7234-7247.
[2]. Sorrenson, P., Rother, C., & Köthe, U. (2020). Disentanglement by nonlinear ica with general incompressible-flow networks (gin). arXiv preprint arXiv:2001.04872.
[3]. Jiang, Z., Liu, Y., Klein, M. H., Aloui, A., Ren, Y., Li, K., ... & Carlson, D. (2023). Causal Mediation Analysis with Multi-dimensional and Indirectly Observed Mediators. arXiv preprint arXiv:2306.07918.
Technical Quality: 3
Clarity: 3
Questions for Authors: • Is $DEV(\tilde{f}(\boldsymbol{X})) = \mathbb{E}[\mathbb{E}[Y | T = 1, \tilde{f}(\boldsymbol{X})] - \mathbb{E}[Y | T = 0, \tilde{f}(\boldsymbol{X})]]$ in Eq. 4 also based on Lemma 3.1? If yes, it should be explicitly stated.
• In what cases can the bias in Theorem 3.2 be arbitrarily bad besides the causal model assumed by linear SCM in Corollary 3.3?
• What is the rationale behind the simulation procedure of $\boldsymbol{C}$ and $\boldsymbol{M}$ in Eq. 11 if the latent confounder represents “job seniority” as elaborated in the introduction? How do you anticipate the estimation error to change if we increase the complexity of the neural network $NN_f$ in Eq. 11?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors do not include a paragraph discussing the limitations and potential societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors deeply appreciate your insightful comments to make our paper better. We hope that we have addressed your concerns in our responses. If you have further questions, we'd be happy to continue the discussions.
***Comment 1: Specify the post-treatment variable for the example.***
**Response:** Thank you for your valuable suggestion. The post-treatment variables in the example are the subset of required skills that are causally influenced by the treatment (i.e., switching a job from onsite to online) while also influencing people's decisions to apply for the job. For instance, switching to online work might require stronger communication skills, which could affect people's application decisions.
***Comment 2: Existing works [1-3] should be included in the related work.***
**Response:** Thank you for pointing out these important works. [2] proposed the GIN network within iVAE, which is invertible with volume preservation. [1] builds on such invertible flows and models spike outputs with a Poisson distribution. [3] adapted iVAE to identify multiple latent mediators from high-dimensional observations. We will include them in the related work.
***Comment 3: Is Theorem 4.1 valid for all types of relations between 𝑀 and 𝑌?***
**Response:** Thank you for your valuable feedback. Theorem 4.1 is valid for all types of relations between $M$ and $Y$. The reason is that the non-factorized part of the sufficient statistics of the conditional prior of $Z$ allows for arbitrary dependence between the latent variables $Z$ (which include both $C$ and $M$) and $T, Y$ (see Assumption 2). Therefore, it implicitly allows any relation between $M$ and $Y$.
***Comment 4: The illustration of (iv) in Assumption 3 is a little confusing.***
**Response:** Thank you for pointing this out. In this paper, we aim to prove that for CiVAE, if $p_{\theta}(X | Y, T)=p_{\tilde{\theta}}(X | Y, T)$ for two $\theta, \tilde{\theta} \in \Theta$, then the map from $\tilde{Z}$ to $Z$ is element-wise bijective. The reason we need $k+1$ $(Y, T)$ points (instead of just $k$) is that, after plugging the $k+1$ $(Y, T)$ points into the equality and taking differences with the first equation, we obtain $k$ linearly independent equations (see Eq. (19)), which are necessary to prove that the sufficient statistics $S_{f}$ can be identified up to a bijective transformation. The details are in Step I of Section C.4.1.
In addition, Section B.2.3 of [4] shows that if the exponential-family parameters $\lambda_{ij}(Y, T)$ are independent, condition ***(iv)*** is satisfied by arbitrary choices of $k+1$ distinct $(Y, T)$ points.
***Comment 5: The empirical evaluation consists of one real-world dataset.***
**Response:** Thank you for your constructive feedback. We have conducted experiments on the real-world IHDP [5] and the Jobs [6] datasets according to your advice, where we follow the same dataset generation process as the Company dataset to simulate the latent confounders $C$ and the latent post-treatment variables $M$ in the observed covariates $X$. We normalize the outcome $Y$ such that the reported errors become comparable. The results are summarized as follows:
| Method | IHDP ATE | IHDP Err $\downarrow$ | Jobs ATE | Jobs Err $\downarrow$ |
|-|-|-|-|-|
| CEVAE | -0.463 ± 0.081 | 0.787 | 0.130 ± 0.047 | 0.525 |
| TEDVAE | -0.317 ± 0.074 | 0.641 | 0.185 ± 0.069 | 0.470 |
| CiVAE | 0.178 ± 0.138 | 0.146 | 0.602 ± 0.162 | -0.053 |
| True ATE | 0.324 ± 0.000 | 0.000 | 0.549 ± 0.000 | 0.000 |
The results further demonstrate that CiVAE is more robust to latent post-treatment bias.
***Comment 6: Is $DEV(\tilde{f}(X)) = \mathbb{E}[\mathbb{E}[Y|T=1, \tilde{f}(X)] - \mathbb{E}[Y|T=0, \tilde{f}(X)]]$ in Eq. 4 also based on Lemma 3.1?***
**Response:** Thank you for the important question. This step is based on the definition of DEV in Definition 1, where we substitute $X'$ with $\tilde{f}(X)$.
***Comment 7: In what cases can the bias in Theorem 3.2 be arbitrarily bad?***
**Response:** Thank you for the important question. We provide two linear cases only to intuitively show the latent post-treatment bias (which can be calculated with the coefficients of the linear structural equations). The latent post-treatment bias in the general, nonlinear cases is provided in Eq. (4), which can also be arbitrarily bad, but it is abstract and cannot be further simplified without further assumptions on the causal generation process of the dataset.
***Comment 8-1: Rationale behind simulating $C$ and $M$ in Eq. 11 if $C$ represents job seniority.***
**Response:** Thank you for the important question. The example in the introduction aims to provide a concrete example of the possible scrambling of latent confounders $C$ (i.e., seniority of the job) and latent post-treatment variables $M$ (i.e., work-mode-relevant job skills) in the observed covariates $X$. However, both $C$ and $M$ are difficult to quantify in the Company setting. Therefore, given $(X, T, Y)$, we simulate $C$ and $M$ from scratch, while Eq. (11) ensures that the information of the real-world data, i.e., the marginal distribution of the observables $(X, T, Y)$, is preserved in the semi-simulated dataset.
***Comment 8-2: How will estimation error change if we increase the complexity of the neural network $NN_{f}$ in Eq. 11?***
**Response:** Thank you for the important question. As we explained in our response to your comment 8-1, Eq. (11) ensures that the marginal distribution of the observable $(X, T, Y)$ is consistent with the real-world data. A more complicated $NN_{f}$ will make the semi-simulated dataset deviate more from the real-world data (due to overfitting), but it wouldn't affect the estimation step (which is independent of the generation of the semi-simulated dataset).
[4] Variational autoencoders and nonlinear ICA: A unifying framework.
[5] Bayesian nonparametric modeling for causal inference.
[6] Evaluating the econometric evaluations of training programs with experimental data.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your response to my comments, though the negative ATE error on the Jobs dataset looks a bit weird. Also, for the second step in Eq. 4 where you replace $\boldsymbol{X}'$ with $\tilde{f}(\boldsymbol{X})$, I mean this substitution probably needs the injectivity statement in Lemma 3.1.
I think the authors' response mostly addresses my concerns, and I'm willing to raise my score.
---
Reply to Comment 1.1.1:
Comment: **Dear reviewer jEuj,**
Thank you for the acknowledgment. We are glad to hear that our responses mostly address your concerns. We will try our best to integrate your valuable comments into the paper. Here, we provide responses to your further questions.
**1.** In the current version, we report the difference between the true ATE and the estimated ATE as the error. We will also provide the absolute difference to make the comparison between different methods more straightforward.
**2.** We note that $DEV(X^{\prime})$ is the ATE estimator when controlling an arbitrary variable $X^{\prime}$. $DEV(X^{\prime})$ is defined as: $$DEV(X^{\prime}) := \mathbb{E}_{p(X^{\prime})}[DCEV(X^{\prime})] := \mathbb{E}[\mathbb{E}[Y|T=1, X^{\prime}] - \mathbb{E}[Y|T=0, X^{\prime}]],$$ where ":=" denotes the RHS is the definition of the LHS. Here, $X^{\prime}$ is an arbitrary variable.
For Eq. (4), we are discussing the bias when estimating the ATE when controlling the variable $\tilde{f}(X)$, i.e., the latent variable inferred from $X$ via $\tilde{f}$. The bias is defined as the difference between the true ATE and the estimated ATE, i.e., $ATE - DEV(\tilde{f}(X))$. Therefore, the second step simply uses the definition of $DEV$, i.e., $$ATE - DEV(\tilde{f}(X)) := ATE - \mathbb{E}_{p(\tilde{f}(X))}[DCEV(\tilde{f}(X))] := \mathbb{E}[\mathbb{E}[Y|T=1,\tilde{f}(X)] - \mathbb{E}[Y|T=0, \tilde{f}(X)]],$$
where no derivation is involved in this step. The actual derivations occur in the 3rd and 4th steps, where the 4th step uses Lemma 3.1 to remove the injective map $\tilde{f}$ from the condition. We will change "=" in the second equation of Eq. (4) to ":=" to avoid confusion. Thank you again for raising this important question, which helps make our paper clearer.
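For additional intuition, the following toy simulation (our own illustrative construction with made-up numbers, not the paper's data) shows how $DEV(X')$ behaves: with a randomized treatment it recovers the true ATE, but controlling a post-treatment variable that shares the outcome's noise path biases it, which is the effect Theorem 3.2 formalizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000

# Toy setting: randomized treatment, true ATE = 2, and a post-treatment
# variable M that depends on T and on the outcome's noise path.
t = rng.integers(0, 2, n)
e = rng.normal(0.0, 1.0, n)
y = 2.0 * t + e
m = (t + e > 0.5).astype(int)

def dev(x_prime):
    # DEV(X') = E_{p(X')}[ E[Y|T=1, X'] - E[Y|T=0, X'] ] for a discrete X'
    total = 0.0
    for v in np.unique(x_prime):
        sel = x_prime == v
        total += sel.mean() * (y[(t == 1) & sel].mean() - y[(t == 0) & sel].mean())
    return total

ate_hat = dev(np.zeros(n, dtype=int))  # controlling nothing: close to 2
biased = dev(m)                        # controlling post-treatment M: biased
```

Here `ate_hat` lands near the true ATE of 2, while `biased` is pulled away from it because $M$ carries outcome information induced by the treatment.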
Best,
Authors | Summary: The authors deal with latent post-treatment bias for proxy-based methods which are employed for causal effect estimation.
They show that post-treatment variables can be latent and mixed into the observed covariates along with the latent confounders.
The authors transform the confounder-identifiability problem into a tractable pair-wise conditional independence test problem.
They prove that the latent confounders and latent post-treatment variables can be identified up to bijective transformations. Finally, they provide experimental analysis for their approach.
Strengths: The paper deals with a very interesting problem. The proposed method appeared to be theoretically robust. The method is evaluated with proper experimental analysis on synthetic and real-world datasets and compared with multiple benchmarks.
Weaknesses: Here I provide some weaknesses of the paper:
* Bi-directed edges in Figure 1 are not defined properly.
* Do-operator in equation 3 is not defined in detail.
* Assumptions in Assumption 2 should be described in more detail.
* The proposed method seems to depend on a lot of assumptions. Assumptions 1,2,3 each contain multiple assumptions. The authors should explain how their assumptions hold for the real-world scenarios they considered in their experiment section.
Technical Quality: 3
Clarity: 3
Questions for Authors: Here I provide some questions:
* Why do the authors assume that the models can recover the true latent space up to invertible transformation (line 151) ? How realistic is that assumption?
* Do the proxy-based methods claim to perform well for the causal graphs in Fig 1c?
* How do these assumptions hold when X is high-dimensional?
* What values of K_C and K_M are considered?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed very few limitations of their paper; more discussion is needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors deeply appreciate your insightful comments to make our paper better. We hope that we have addressed your concerns in our responses. If you have further questions, we'd be happy to continue the discussions.
***Comment 1: Bi-directed edges in Figure 1 are not defined properly.***
**Response:** Thank you for pointing this out. Bi-directed edges in Fig. 1 denote an arbitrary causal relationship between each post-treatment variable $M_{i}$ and the outcome $Y$. This will be clarified in the revised paper.
***Comment 2: Do-operator in equation 3 is not defined.***
**Response:** Thank you for pointing this out. The do-operator represents an intervention in the causal model, where $\mathbb{E}[Y|do(T=t)]$ represents the expected value of $Y$ if we were to intervene and set the treatment $T$ to value $t$ for the entire population, regardless of the natural causes of $T$. We will add this explanation to the paper for clarity.
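As a toy illustration of this point (our own construction, not from the paper), the contrast $\mathbb{E}[Y|T=1]-\mathbb{E}[Y|T=0]$ differs from $\mathbb{E}[Y|do(T=1)]-\mathbb{E}[Y|do(T=0)]$ under confounding, and the back-door adjustment over the confounder recovers the interventional contrast:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# Confounded toy model: C -> T, C -> Y, T -> Y; the do-contrast equals 1
c = rng.integers(0, 2, n)
t = rng.binomial(1, 0.2 + 0.6 * c)
y = 1.0 * t + 2.0 * c + rng.normal(0.0, 1.0, n)

# Naive observational contrast is inflated by the confounder C
naive = y[t == 1].mean() - y[t == 0].mean()

# Back-door adjustment: average the C-stratified contrasts over p(C)
do_contrast = sum(
    (c == v).mean()
    * (y[(t == 1) & (c == v)].mean() - y[(t == 0) & (c == v)].mean())
    for v in (0, 1)
)
```

`do_contrast` matches the interventional effect of 1, while `naive` is inflated by the path through $C$.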
***Comment 3: Assumption 2 should be described in more detail.***
**Response:** Thanks for the constructive suggestion! The assumption states the mild condition the prior of $Z$, i.e., $P(Z|Y, T)$, should satisfy for individual and bijective identification of $Z$ from the observables $(X, Y, T)$. Specifically, it assumes that $P(Z|Y, T)$ belongs to a general exponential family with two-part sufficient statistics $S(Z)$: ***(i)*** A factorized part $S_f(Z)$, where each component $S_i(Z_i)$ has at least one invertible dimension. This ensures individual/bijective identifiability of latent variables. ***(ii)*** A non-factorized part $S_{nf}(Z)$ modeled by a ReLU deep neural network, which allows complex dependencies among latent variables. We will add the above explanation to the paper.
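A minimal numerical sketch of this two-part prior (with random placeholder weights and parameters, not the learned model) might look as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
k_z = 6  # latent dimension (illustrative)

def s_f(z):
    # factorized sufficient statistics, e.g. S_f(Z) = [Z, Z^2] per dimension
    return np.concatenate([z, z ** 2])

# non-factorized part: a one-hidden-layer ReLU network mapping Z to a scalar,
# which lets the prior encode dependence across latent dimensions
w1 = rng.normal(size=(k_z, k_z))
w2 = rng.normal(size=k_z)

def s_nf(z):
    return float(np.maximum(w1 @ z, 0.0) @ w2)

def log_prior_unnormalized(z, lam_f, lam_nf):
    # lam_f and lam_nf play the role of natural parameters produced from (Y, T)
    return lam_f @ s_f(z) + lam_nf * s_nf(z)

z = rng.normal(size=k_z)
val = log_prior_unnormalized(z, rng.normal(size=2 * k_z), 0.5)
```

In the actual model the natural parameters are output by dense layers conditioned on $(Y, T)$ and all weights are learned; this sketch only shows the functional form of the density.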
***Comment 4: How the assumptions hold for the real-world scenarios in the experiment section?***
**Response:** Thank you for your insightful suggestion. We provide empirical justification for the three assumptions as follows:
For Assumption 1, since the dimension of the observed covariates $X$, i.e., $K_{X}$, is larger than the dimension of the latent variables $Z=[C, M]$, i.e., $K_{Z} = K_{C} + K_{M}$, it would be likely that $f$ is injective or very close to injective due to the low probability that two distant points in the low dimensional latent space are mapped by $f$ to the same point in the high-dimensional $X$ space.
Assumption 2, i.e., the conditional prior of latent variables following a general exponential family, would be a reasonable approximation of the true prior, as general exponential family includes the most commonly used distributions, and the non-factorized part of the sufficient statistics parameterized by a ReLU deep neural network allows complex (conditional) dependence among the latent variables.
Assumption 3 ensures that the dataset and model class we choose allow the identification, where ***(i)*** states that the noise distribution should not be degenerate, which depends on the dataset quality. ***(ii)*** and ***(iii)*** can be trivially satisfied by neural networks. For ***(iv)***, Section B.2.3 of [1] shows that if the exponential family parameters $\lambda_{ij}(Y, T)$ are independent, ***(iv)*** can be satisfied with arbitrary $k+1$ different $(Y, T)$ points. This can be satisfied by most exponential family distributions.
***Comment 5: Why do the authors assume that the models can recover the true latent space up to invertible transformation (line 151)?***
**Response:** Thanks for the valuable feedback. We want to clarify that this assumption is made only for existing proxy-based methods, not for our proposed CiVAE. This is actually the most optimistic assumption we could make for these methods as they exactly recover the latent space. We show that even under this optimistic assumption, the ATE/CATE estimation is still arbitrarily biased when latent post-treatment variables are present.
***Comment 6: Do the proxy-based methods claim to perform well for the causal graphs in Fig 1c?***
**Response:** Thanks for the valuable feedback. Most proxy-based methods do not explicitly address the causal graph in Fig 1c. They typically rely on the strong ignorability assumption, which implies both that ***(i)*** all confounders are captured by observed covariates, and ***(ii)*** no post-treatment variables are included. However, these methods often focus more on the first implication and ignore the potential presence of post-treatment variables in the proxies (which leads to violation of ***(ii)***). This can lead to biased ATE/CATE estimates when latent post-treatment variables are mixed with confounders in the observed covariates, and motivates us to design CiVAE to address the latent post-treatment bias.
***Comment 7: How do these assumptions hold when $X$ is high-dimensional?***
**Response:** Thanks for raising this important point. Assumption 1 (noisy-injectivity) implies that the dimension of $X$ is **larger** than or equal to that of the latent space, which is typically satisfied when $X$ is high-dimensional. Assumption 2 puts a general prior on the latent variables, whereas Assumption 3 contains standard regularity conditions; both are independent of the dimensionality of $X$. Therefore, all three assumptions in this paper hold for high-dimensional covariates $X$.
***Comment 8: What values of K_C and K_M are considered?***
**Response:** Thanks for the valuable feedback. In our experiments, we considered various combinations of $K_{C}$ and $K_{M}$: For the simulated datasets, we empirically set $K_C = 3$ and $K_M = 3$. For the real-world Company dataset, we empirically set $K_{C} = 5$ and $K_{M} = 3$. Additionally, in Section 5.3, for the Company dataset, we conducted a sensitivity analysis where we varied the ratio of $K_{C}$ to $K_{M}$ over $\\{2:6, 3:5, 4:4, 5:3, 6:2\\}$. This analysis demonstrates the robustness of CiVAE under different latent variable configurations.
[1] Variational autoencoders and nonlinear ICA: A unifying framework. | Summary: This paper addresses the challenge of causal inference with observational data, particularly when direct measurement of confounders is infeasible. The authors propose a new method, Confounder-identifiable Variational Autoencoder (CiVAE), to mitigate post-treatment bias using observed proxies for both latent confounders and latent post-treatment variables. The paper provides a theoretical analysis under specific assumptions and validates the proposed approach through experiments on both simulated and real-world datasets.
Strengths: * The paper investigates a critical question concerning the mitigation of post-treatment bias, which is essential in various practical scenarios.
* The ideas presented in the paper are clear and easy to follow, and the theoretical analysis is well-established.
Weaknesses: * In practical scenarios, interactions among latent factors are often present and can significantly impact the estimation. It would be beneficial if the authors could elaborate on how their method addresses these interactions and whether there are any theoretical guarantees regarding their handling in the proposed approach.
* The theoretical guarantees rely on strong assumptions, and the assumptions are hard to verify in practice. In assumption 1, the paper assumes an injective function of latent confounders and latent post-treatment variables into the observed proxy. This is a strong assumption, and it will be much harder to meet the assumption in general when the function is nonlinear. The specific setup with strong assumptions limits the practical applicability of the proposed approach. It would be helpful if the authors could provide examples where these assumptions hold and demonstrate how they can be verified.
* The experiment lacks sufficient details on setup and implementation. Could the authors provide more specific information to enhance understanding of the empirical results?
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: * The proposed method relies on very strong assumptions to ensure identifiability, which can be challenging to verify in practical applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors deeply appreciate your insightful comments to make our paper better. We hope that we have addressed your concerns in our responses. If you have further questions, we'd be happy to continue the discussions.
***Comment 1: How CiVAE addresses interactions among latent variables and theoretical guarantees?***
**Response:** Thank you for raising this important point. We have extended our analysis to interactions among latent variables in Section 4 of the Appendix. Specifically, we consider two cases: ***(i)*** Intra-interactions among latent mediators $M$ and ***(ii)*** Inter-interactions between $M$ and latent confounders $C$.
**Theoretically**, the inferred latent variables $\hat{Z}$ via Eq. (10) still individually identify the true latent variables, i.e., $[C, M]$, up to a bijective map, as Assumption 2 allows arbitrary (conditional) dependence among latent variables. When interactions exist, we can use more general causal discovery methods, e.g., the PC algorithm, to further identify the latent confounders $C$ in $\hat{Z}$. The reason is that, since latent post-treatment variables $M$ cannot causally influence $C$ (otherwise $C$ would be post-treatment), and the PC algorithm orients edges in the causal graph via colliders, the latent confounders $C$ can be properly oriented by the PC algorithm, as they form colliders with $T$, and can therefore be identified from $\hat{Z}$.
**Empirically**, we simulate two datasets according to the above two cases, and show that CiVAE can be adapted to handle these interactions by adopting the PC algorithm in the second step of confounder identification. Tables 1 and 2 in the Appendix demonstrate that the adapted CiVAE remains more robust to latent post-treatment bias compared to baselines even when interactions exist among latent variables.
***Comment 2-1: Assuming an injective function of latent confounders and post-treatment variables into the observed proxy is strong, and it will be harder to meet the assumption in general when the function is nonlinear.***
**Response:** Thank you for your insightful comments. The proposed noisy-injectivity assumption is weaker than a strict injective assumption, as it allows the map from latent variables $Z$ (including latent confounders $C$ and latent post-treatment variables $M$) to the **observed** covariates $X$ to be non-injective due to the presence of noise.
In addition, since the dimension of the observed covariates $X$, i.e., $K_{X}$, is larger than the dimension of the latent variables $Z=[C, M]$, i.e., $K_{Z} = K_{C} + K_{M}$, it would be likely that $f$ is injective or very close to injective in practice due to the low probability that two distant points in the low dimensional latent space are mapped by $f$ to the same point in the high-dimensional $X$ space. We will clarify the above points in the revised paper.
***Comment 2-2: It would be helpful if the authors could provide examples where assumptions hold and demonstrate how they can be verified.***
Thank you for your insightful comments. For the remaining two assumptions, we provide further discussion as follows:
Assumption 2, i.e., the conditional prior of latent variables following a general exponential family, would be a reasonable approximation of the true prior. The reason is that the non-factorized part of the sufficient statistics of the general exponential family defined in Eq. (7) is parameterized by a ReLU deep neural network, which allows complex (conditional) dependence among the latent variables.
Assumption 3 ensures that the dataset and model class we choose allow the identification. Specifically, ***(i)*** states that the noise distribution should not be degenerate, which depends on the dataset quality. ***(ii)*** and ***(iii)*** can be trivially satisfied by neural networks. For ***(iv)***, Section B.2.3 of [1] shows that if the factorized part of the exponential family parameters $\lambda_{ij}(Y, T)$ are independent (a very weak condition), ***(iv)*** can be satisfied with **arbitrary** $k+1$ different $(Y, T)$ points.
[1] Variational autoencoders and nonlinear ICA: A unifying framework.
***Comment 3: The experiment lacks sufficient details on setup and implementation. Could the authors provide more specific information to enhance understanding of the empirical results?***
**Response:** Thank you for your constructive feedback. The detailed setup and implementation are summarized as follows:
For the simulated datasets, we empirically set the dimension of the latent confounders and the latent post-treatment variables to $K_{C}=3$ and $K_{M}=3$, which leads to $K_{Z}=K_{C} + K_{M}=6$. The dimension of the observed covariates is set to $K_{X}=20$. The dataset generation process for both the ***mixedLatentMediator*** and ***mixedLatentCorrelator*** cases has been formulated in the paper. For CiVAE, the inference network $q_{\phi}(Z|X, Y, T)$ is implemented as an MLP with one hidden layer of dimension $K_{H}=K_{Z}$. For the prior network $p_{S, \lambda}(Z|Y, T)$: for the factorized part, we implement $S_{f}(Z) = [Z, Z^{2}]$ and implement $\lambda_{f}(Y, T)$ as a dense layer of $\mathbb{R}^{2} \rightarrow \mathbb{R}^{2 \times K_{Z}}$; for the non-factorized part, we implement $S_{nf}(Z)$ as a ReLU neural network with hidden dimension $K_{H}=K_{Z}$ and output dimension 1, and implement $\lambda_{nf}(Y, T)$ as a dense layer of $\mathbb{R}^{2} \rightarrow \mathbb{R}$. We train the model according to Eq. (10) for ten epochs, conduct ten random runs of the experiment, and report the average and standard deviation.
For the real-world dataset, we select the 52 most common job skills as $X$ (which leads to $K_{X}=52$). We set the dimension of the latent space to $K_{Z} = 8$ and vary the ratio $K_{C} : K_{M}$ from $2:6$ to $6:2$, plotting the results in Fig. 3. The implementation and training of CiVAE follow the same settings as the simulated datasets. We will include the above details in the revised paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing detailed responses, and these partially address the concerns. I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer kHjK,**
Thank you for the acknowledgment. We are glad that our responses partially address your concerns. We will take your remaining comments seriously and further polish the paper, and we are committed to integrating your valuable comments into the paper. Thank you again for your time and efforts.
Best,
Authors | Summary: In this paper, the authors investigated the issue of latent post-treatment bias in causal inference from observational data. They showed that the estimator of existing proxy-of-confounder-based methods, i.e., DEV(f(X)), is an arbitrarily biased estimator of the Average Treatment Effect (ATE) when the selected proxy of confounders X accidentally mixes in latent post-treatment variables (Theorem 3.2). To address this issue, they proposed the Confounder-identifiable VAE (CiVAE), which identifies latent confounders up to bijective transformations under a mild assumption regarding the prior of latent factors. They showed that controlling for latent confounders inferred by CiVAE can provide an unbiased estimation of the ATE. Experiments on both simulated and real-world datasets demonstrate that CiVAE exhibits superior robustness to latent post-treatment bias compared to state-of-the-art methods.
Strengths: Being able to recover latent variables (confounders, post-treatment variables, or others) from observations is challenging and important. Ignoring latent variables or assuming their non-existence is unrealistic and can lead to wrong conclusions and decisions. The authors further motivated the importance of recovering latent confounders and post-treatment variables, and the consequences of not doing so (Theorem 3.2). The solution provided shows originality and quality.
Weaknesses: The presentation can be improved.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is Fig. 1(c) general enough? It assumes that all latent variables are either confounders or post-treatment variables. However, there can be other types of latent variables, such as:
1. Pre-treatment Variables: These latent variables influence the treatment (T) but do not directly affect the outcome (Y) or the additional observation (X). They exist before the treatment is applied and can introduce selection bias.
2. Latent Interaction Variables: These latent variables interact with the treatment (T) to influence the outcome (Y). They are not confounders because they do not influence the treatment directly, nor are they post-treatment variables.
3. Latent Mediator Variables: These latent variables mediate the effect of the treatment (T) on the outcome (Y) and are not directly observed.
4. Latent Variables Influencing Both Pre-treatment and Post-treatment States: These latent variables influence the state of the system both before and after the treatment but do not fit the typical definition of confounders or post-treatment variables. For example, a latent mental state might affect both a person's initial willingness to undergo treatment and their behavior or responses post-treatment.
Can the proposed method handle these types too (with some extension), or some of the types are quite disruptive to the proposed methodology?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors deeply appreciate your insightful comments to make our paper better. We hope that we have addressed your concerns in our responses. If you have further questions, we'd be happy to continue the discussions.
***Comment 1: How does CiVAE (with possible extension) address other types of latent variables?***
**Response:** Thank you for the insightful comments. It would indeed be interesting to discuss the behavior and possible extension of CiVAE when different types of latent variables $W$ are scrambled into the observed covariates $X$ alongside the latent confounders $C$ and the latent post-treatment variables $M$.
First, from Assumption 2, we know that CiVAE allows arbitrary (conditional) dependence among the latent variables $Z$ that generate $X$, and can **individually** and **bijectively** identify them from the observables $(X, Y, T)$. Therefore, ***regardless of the type of*** $W$, if we denote the inferred latent variables as $\hat{Z}$, we have $\hat{Z}\_{i} \in \\{f\_{k}(W\_{k}), f\_{k'}(C\_{k'}), f\_{k''}(M\_{k''})\\}$, where $f$ is a bijective function. However, for each $i$, whether $\hat{Z}\_{i}$ corresponds to type $W$, $C$, or $M$ is unknown.
Therefore, if only $C$ and $M$ exist, to further distinguish $C$ from $M$ in $\hat{Z}$ (where $C$ are pre-treatment and $M$ are post-treatment), a natural strategy is to select variables in $\hat{Z}$ whose pairwise dependence increases after conditioning on $T$: only $C$ form a V-structure with $T$ (i.e., $C_{i} \rightarrow T \leftarrow C_{j}$), where dependence increases after conditioning on $T$. In contrast, $C$ and $M$ form a chain structure with $T$ (i.e., $C_{i} \rightarrow T \rightarrow M_{j}$), and $M$ form a fork structure with $T$ (i.e., $M_{i} \leftarrow T \rightarrow M_{j}$), where dependence decreases after conditioning on $T$.
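This V-structure vs. chain/fork intuition can be checked with a small simulation (our own illustrative construction, not the paper's experiments): for two causes of $T$, dependence appears after conditioning on $T$, while for two effects of $T$, dependence disappears:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# C1 -> T <- C2: two independent causes forming a V-structure with T
c1, c2 = rng.normal(size=n), rng.normal(size=n)
t = (c1 + c2 + rng.normal(scale=0.5, size=n) > 0).astype(int)

# M1 <- T -> M2: two post-treatment effects forming a fork with T
m1 = t + rng.normal(scale=0.5, size=n)
m2 = t + rng.normal(scale=0.5, size=n)

def abs_corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

marg_c = abs_corr(c1, c2)                  # ~0: causes marginally independent
cond_c = abs_corr(c1[t == 1], c2[t == 1])  # collider: dependence appears
marg_m = abs_corr(m1, m2)                  # fork: marginally dependent
cond_m = abs_corr(m1[t == 1], m2[t == 1])  # ~0 once T is conditioned on
```

Correlation is used here as a crude stand-in for the conditional independence tests used in the paper; the qualitative pattern is the point.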
We can use similar logic to reason with the case when different types of $W$ exist.
***Case 1. Pre-treatment variables that do not direct influence the outcome.***
If $W$ are pre-treatment variables, since they causally influence the treatment $T$, they still form a V-structure with $T$, and therefore CiVAE will identify them in $\hat{Z}$ and include them in the control set after the pair-wise independence test.
Here, we need to further divide the pre-treatments $W$ into two cases.
The first case is that $W$ are correlated with $Y$. In this case, controlling the identified $W$ can reduce both confounding bias and variance.
Another case is that $W$ are not correlated with $Y$. In this case, controlling $W$ is still unbiased (which achieves the paper's main purpose of removing latent post-treatment bias), but the estimation variance could increase. A simple extension of CiVAE to address this issue is to conduct another round of independence tests among the identified confounders $\hat{C}$ (with and without the outcome $Y$ as the condition) and keep the pairs in $\hat{C}$ whose dependence increases after conditioning on $Y$ (as true confounders form a V-structure with $Y$). This discussion will be included in the revised paper.
***Case 2. Latent interaction variables.***
The case where $W$ are latent interaction variables is more complicated, as the relation between $W$ and the treatment $T$ is undetermined. If each $W_{i}$ is confounded with $T$ via an independent unobserved confounder $U_{i}$, $W$ have the following relationship with $T$: $W\_{i} \leftarrow U\_{i} \rightarrow T \leftarrow U\_{j} \rightarrow W_{j}$. Since the dependence among $W$ will increase after conditioning on $T$, $W$ will be included in the control set. However, if $W$ are confounded with $T$ via a shared confounder $U$, the relation becomes $T \leftarrow U \rightarrow \{W\_{i}, W\_{j}\}$, and conditioning on $T$ would likely decrease the dependence (as $T$ contains the confounder information). In this case, $W$ won't be included in the control set.
However, regardless of whether $W$ are included in the control set, CiVAE remains unbiased, because $W$ do not influence the identification of confounders $C$. In addition, $W$ are still pre-treatment, such that no post-treatment bias can be introduced in the ATE/CATE estimation.
***Case 3. Latent mediator variables.***
If $W$ are latent mediators, they are a special case of post-treatment variables $M$. Since $W$ form a fork structure with the treatment $T$ (i.e., $W\_{i} \leftarrow T \rightarrow W\_{j}$), their dependence will decrease after conditioning on $T$, and therefore they will be successfully excluded from the control set, eliminating latent post-treatment bias.
***Case 4. Latent variables influencing both pre-treatment and post-treatment states.***
If $W$ are latent variables that influence both pre-treatment and post-treatment states, since $W$ still form a fork structure with the treatment $T$ (i.e., $W\_{i} \leftarrow T \rightarrow W\_{j}$), their dependence will decrease after conditioning on $T$, and therefore they will be successfully excluded from the control set, eliminating latent post-treatment bias.
---
Rebuttal Comment 1.1:
Comment: Thank you. I've read your rebuttal, responses, and the other reviews. I will keep an eye on the reviewers' discussion phase, if there is one.
---
Rebuttal 2:
Comment: **Dear reviewer VF9h,**
Thank you for the acknowledgment. We will try our best to integrate your valuable comments into the paper. Thank you again for your time and efforts.
Best,
Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Score Distillation via Reparametrized DDIM | Accept (poster) | Summary: This paper introduces a novel algorithm called Score Distillation via Inversion (SDI) that enhances the quality of 3D shape generation. By inverting Denoising Diffusion Implicit Models (DDIM) at each step and incorporating the initial noise into the estimated score, SDI addresses the over-smoothing and detail loss issues associated with traditional Score Distillation Sampling (SDS) methods in 3D generation. Experimental results demonstrate that SDI significantly improves the quality of 3D shapes, closely matching the quality of 2D image generation, without the need for additional neural network training or multi-view supervision. The work provides valuable insights into understanding and improving 3D asset generation with diffusion models.
Strengths: 1. The paper provides a theoretical analysis that links Score Distillation Sampling (SDS) with Denoising Diffusion Implicit Models (DDIM), offering a deeper understanding of the discrepancies between the 2D and 3D generation processes, which differs from the purely empirical enhancements made by most previous methods.
2. The modification to the existing SDS method is straightforward and does not require training additional networks or multi-view supervision. Furthermore, SDI significantly improves the quality of 3D shape generation, which makes it a practical solution for enhancing 3D generation.
3. This paper provides a thorough analysis and presents a set of convincing ablation experiments.
Weaknesses: 1. According to the authors’ theory and hypothesis, the better the noise term in Eq. (8) is estimated, the more the 3D generation quality should improve. However, in the ablations on the choice of k(x), SGD optimization doesn’t show significant improvement. This can also be observed in Figure 10, where the green and purple curves show the lowest MSE, comparable to the orange curve, yet the orange curve achieves the best result. I would appreciate it if the authors could provide some explanation.
2. One of the paper's key perspectives is that SDS introduces high variance. I have read another paper [1] dedicated to minimizing the variance introduced in SDS, and it may be worthwhile to compare with that as well.
[1] Tang, B., Wang, J., Wu, Z., & Zhang, L. (2023). Stable score distillation for high-quality 3d generation. ArXiv Preprint ArXiv:2312.09305.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough review and recommendations. We provide clarifications below.
# Numerical error in choices of $\kappa$
Thank you for highlighting this effect in the review; we believe a more detailed explanation will indeed benefit the paper. We touch on the intuition around this question in the general response: it is not exactly true that a $\kappa$ with a lower error in eq.8 leads to better 3D generation. It is true, however, that a more precise solution to eq.8 aligns score distillation more closely with DDIM. Other factors like initialization, view consistency, and DDIM’s generative ability also impact 3D quality. Below we address each of the particular questions in detail.
1. The green configuration in Figure 10 ($\gamma_\text{fwd}=1, \gamma_\text{inv}=1$) indeed has the lowest error in eq.8. This is consistent with [1, 2], which claim that DDIM inversion accumulates numerical errors in conditional generation [1] while achieving almost perfect inversion in unconditional generation [2]. When we use unconditional generation together with unconditional inversion, we succeed at closely matching the generation guidance to a DDIM trajectory. This trajectory, however, is unconditional. Thus the final 3D shape does not correspond to the desired (or any) prompt: exactly what we see in Figure 10.
2. As it was correctly noted, the purple and red configurations in Fig.10 (conditional vs. unconditional inversion) exhibit the same approximation error. In practice, both configurations yield detailed 3D shapes (also Fig.10). However, as described in the general reply (see “Guidance Term”), a combination of unconditional inversion and conditional generation leads to the accumulation of saturation in the final shape. Please also refer to Fig.5 of the rebuttal PDF for the visualization of this effect.
3. Regarding the error of SGD and Random noise in Figure 9, the two lines are mostly within one sigma of each other (please note the translucent areas behind each line signifying standard deviation). The numerical error on the right is computed and averaged across 10 prompts. We did not notice any improvement in 3D generation quality nor in the error induced in eq.8 by performing the stochastic gradient descent on the noise term. The SGD optimization of 10 steps was initialized to a randomly sampled noise image, which might explain the resemblance of the generated shapes to the randomly sampled noise.
## Comparison with StableSD
Thank you for providing a useful reference. We would like to point out that we already have a qualitative comparison with Stable Score Distillation in the appendix (StableSD in section E3).
Mathematically, the idea behind this line of work is that the mode-seeking term in score distillation ($\\epsilon\_\theta^t$) exhibits high variance. Thus the second term ($- \epsilon$ in SDS or the LoRA in VSD[3]) reduces the variance of the first. A similar idea was also introduced in SteinDreamer [4], where the authors numerically analyze variance in SDS and VSD. Both StableSD and SteinDreamer propose to use control variates to minimize the noisiness of the guidance. Please note that in both of these works, the excessive variance is compensated for by subtracting another noise term, with coefficients chosen to maximize correlation.
In our work, however, we explain the second term in SDS not as a control variate, but rather as a projection of the previous step in a re-parametrized DDIM trajectory. Moreover, our variance reduction comes not from using control variates, but from finding a better noise to add to the rendering in the first place. We show that the excessive variance in $\epsilon\_\theta^t(x_t)$ comes from the fact that $x_t$ was obtained using randomly sampled noise, while our analysis reveals that it should follow a very particular structure (eq.8). By finding a better approximation of the desired noise term, we eliminate the root cause of the excessive variance instead of compensating for it later.
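As a purely illustrative aside, the control-variates idea referenced above can be sketched in a few lines of NumPy. The toy estimand and the Cov/Var coefficient below are generic textbook choices, not tied to StableSD's or SteinDreamer's actual implementations:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)
f = np.exp(x)   # noisy quantity to estimate; E[f] = e^{1/2}
g = x           # control variate with known mean E[g] = 0

# Optimal coefficient c* = Cov(f, g) / Var(g) minimizes Var(f - c*g)
c = np.cov(f, g)[0, 1] / g.var()
f_cv = f - c * (g - 0.0)  # same expectation as f, lower variance

assert f_cv.var() < f.var()
assert abs(f_cv.mean() - np.exp(0.5)) < 0.05
```

The point of the contrast above: a control variate subtracts a correlated noise term after the fact, whereas the rebuttal argues SDI instead changes the noise that is injected in the first place.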
Unfortunately, the official implementation of Stable Score Distillation is not yet publicly available (nor for SteinDreamer) and we were not able to reproduce their reported results. We will be happy to augment our Table 1 with a quantitative comparison with StableSD as soon as an official implementation gets published.
[1] Ron Mokady et al. “Null-text inversion for editing real images using guided diffusion models”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023, pp. 6038–6047.
[2] Daiki Miyake et al. “Negative-prompt inversion: Fast image inversion for editing with text guided diffusion models”. In: arXiv:2305.16807 (2023).
[3] Wang, Zhengyi, et al. "Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation." Advances in Neural Information Processing Systems 36 (2024).
[4] Peihao Wang et al. “Steindreamer: Variance reduction for text-to-3d score distillation via stein identity”. In: arXiv:2401.00604 (2023).
---
Rebuttal Comment 1.1:
Comment: Dear reviewer ZvgL,
Thank you again for the recommendations! Could you please let us know if we fully addressed the comments in the review and/or if there is any additional information we could provide? Please note that the author-reviewer discussion period will be over on Tuesday at 23:59 AoE and we won't be able to further reply after this point.
Thank you in advance!
On behalf of all authors | Summary: The paper introduces an effective SDS modification that replaces random noise samples in the SDS objective with those obtained with DDIM inversion. The proposed technique enhances text-to-3D generation, outperforming SDS and being comparable to more sophisticated methods.
Strengths: * The paper is well organized and provides valuable intuitions, illustrations, and discussions throughout. The additional discussion in Appendix E is also beneficial.
* The proposed method is simple, reasonable, and well motivated. The derivation is clear.
* The experimental results are promising, and the ablation study in Sec 6.2 is highly valuable and interesting.
Weaknesses: * Inversion with negative guidance is surprising, and its rationale remains somewhat unclear. A more detailed investigation and justification would be useful.
* The idea of using DDIM inversion is very similar to ISM[1]. The primary distinction is guided vs unguided inversion. Although the authors justify the guided inversion with negative guidance in Fig. 10, a thorough discussion and detailed comparisons are essential.
* The quantitative evaluation relies on CLIP-IQA, which does not seem quite suitable for 3D generation evaluation. Conducting a user study would be highly valuable. If it is unavailable, some recent works [2,3] aim to propose automated evaluation metrics. For example, would it be reasonable to evaluate ImageReward[4] following [2]?
* The quantitative results miss NFSD and the qualitative comparisons are presented only for two prompts. Could the authors add NFSD to Tab.1 and provide more visual comparisons with all methods?
* There is no evaluation of diversity under a given prompt. It would be useful to include a few generated results for different seeds.
To sum up, I am generally optimistic about the submission but have some concerns regarding the evaluation and close connection with [1], as discussed above.
[1] Liang et al. LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching.
[2] He et al. T3Bench: Benchmarking Current Progress in Text-to-3D Generation.
[3] Wu et al. GPT-4V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation.
[4] Xu et al. ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Please address the questions and concerns raised in "Weaknesses".
* In Fig.5, the intermediate $x_t$ do not have noise in the background. DDIM inversion must follow the same marginal distributions as the forward process, but the presented $x_t$ do not. We observed a similar effect for images with monotonic regions when inversion is applied directly to a clean image. We believe such images are likely OOD for the pretrained DM, and injecting slight noise with $\sigma_{min}$ into the image fixed the problem. Do the authors have any thoughts on this? Would the solution above work for SDI, and how does it affect its performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors addressed the limitations and potential negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful and constructive review. Below we address the points raised in the review:
# Inversion with negative CFG
Thank you for pointing this out. We agree that adding this intuition to the main paper will benefit the clarity. Below we support the intuition described in the general response with theoretical and empirical analysis.
## Experimental analysis of negative CFG
In the general response, we base our intuition on the fact that negative CFG attracts generation to the mean of the training dataset. To demonstrate this, we perform a simple experiment of 2D generation with DDIM for different prompts and CFG values of +/-7.5 for each (Fig.3 of the rebuttal PDF). Instead of generating images with opposite prompts, negative CFG directs images to the same class (real estate). This happens because “real estate” happens to lie in the middle of the embedding space, which the process is attracted to.
## Theoretical justification
The effect of attraction to the mean can also be shown more formally. Let $X$ denote the full dataset and $X_y$ the prompt-conditioned subset.
From the optimal denoiser perspective and notations from [1]:
$\hat{\epsilon}\_\theta^t(x_t,\varnothing)=\sum_{i\in X}w_i(x_t)x_i$
$\hat{\epsilon}\_\theta^t(x_t,y)=\sum_{j\in X_y}w_j(x_t)x_j$
Now, rewriting the CFG definition:
$\epsilon_\theta^t(x_t,y)=(1-\gamma)\sum_{i\in X}w_i(x_t)x_i+\gamma \sum_{j\in X_y}w_j(x_t)x_j=$
$=(1-\gamma)[\sum_{i\in X\setminus X_y}w_i(x_t)x_i+\sum_{k\in X_y}w_k(x_t)x_k] + \gamma\sum_{j\in X_y}w_j(x_t)x_j=$
$=(1-\gamma)\sum_{i\in X\setminus X_y}w_i(x_t)x_i+\sum_{j\in X_y}w_j(x_t)x_j$
In the final formula, image generation is attracted to the images in the dataset relevant to the prompt (independently of CFG), and is also repulsed from or attracted to the mean of the rest of the dataset, depending on the sign of $1-\gamma$. Consequently, **negative CFG values attract the image to the mean of the dataset and repulse it from the mean during DDIM inversion**.
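The algebraic rearrangement above can be checked numerically. A small NumPy sanity check with toy data and arbitrary weights (the points, weights, and subset split are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3                     # |X| = 8; the first k points form X_y
X = rng.normal(size=(n, 2))     # toy "dataset" of 2D points
w = rng.uniform(size=n)         # stand-in for the weights w_i(x_t)
gamma = -7.5                    # a negative CFG value

uncond = (w[:, None] * X).sum(0)       # sum over X
cond = (w[:k, None] * X[:k]).sum(0)    # sum over X_y
rest = (w[k:, None] * X[k:]).sum(0)    # sum over X \ X_y

lhs = (1 - gamma) * uncond + gamma * cond  # CFG definition
rhs = (1 - gamma) * rest + cond            # rearranged form
assert np.allclose(lhs, rhs)
```

The check confirms that the prompt-relevant term enters with coefficient 1 regardless of $\gamma$, while the rest of the dataset is weighted by $1-\gamma$.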
As mentioned in the general response, our intuition is that *the repulsion from the mean of the dataset in the case of negative CFG serves as a regularization for the noise term and allows to compensate for the bad initialization*.
# Comparison with ISM
Please see the general response above for a detailed comparison between our algorithm and ISM.
# Additional Metrics
Thank you for the useful suggestion. We agree that reporting a metric that reflects human preference will make the paper stronger. We adopted the suggested ImageReward. We are using the official implementation and report the rewards directly. Following the same protocol as in Table 1 of the main paper we average the metric across 50 views for each of the 43 prompts. We report the obtained metrics below and plan to augment Table 1 with it.
|Method|ImageReward ↑|
|-|-|
|SDS, 10k steps|-1.51±0.83|
|SJC, 10k steps|-1.76±0.51|
|VSD, 25k steps|-1.17±0.58|
|ESD, 25k steps|-1.20±0.64|
|HIFA, 25k steps|**-1.16±0.69**|
|SDI, 10k steps|-1.18±0.59|
# Additional comparisons
## NFSD
While we compare our method with NFSD qualitatively, we omit quantitative comparison. The official implementation of NFSD is not yet available and we were not able to reproduce its results. We will be happy to add the numerical comparison to Table 1 as soon as an official implementation becomes publicly available.
## Additional qualitative comparisons
In response to the request for more qualitative comparisons beyond the two prompts in the main paper, we would like to kindly point out the numerous qualitative comparisons in the appendix. For this, we follow a protocol adopted across the field: for each baseline, we provide the renderings reported in the corresponding papers. To the best of our knowledge, we have reported all prompts and images presented in the corresponding baselines. If considered necessary, we will add visual comparisons with all baselines, using open-source implementations.
# Evaluation of diversity
We agree that additional evaluation of diversity will enhance our paper. Please find examples of generations for different seeds and prompts with our method in Fig.4 of the rebuttal PDF. Our generations exhibit a certain degree of diversity but are mostly the same at a coarse level. We did not notice lower diversity compared to SDS[2] or VSD[3].
Additionally, Fig.6 of the rebuttal PDF illustrates interesting behavior, where different human-related prompts generate similar faces. We are not sure if this is a limitation of the underlying diffusion model or our distillation algorithm and plan to address this in future work.
# Noise in the background of x_t
We agree with the intuition about initial images being out-of-distribution. Indeed, it is rare to find perfectly monotonic backgrounds in natural images. It is unclear if the intermediate $x_t$ is unlikely under the marginal distributions or $x_t$ is close to a mode (in the same way as a completely uniform image has the highest probability density in the i.i.d Gaussian).
In our experiments, however, we did not notice any issues caused by this behavior. Moreover, as we mention in Appendix A.2, we adopt a technique from [4] to fight the Janus problem: we increase the entropy of the process by mixing a small amount of noise during inversion. This helps increase the flexibility of the model to generate non-frontal views. At the same time, it removes the monotonic regions in the noise samples.
[1] Permenter, Frank, and Chenyang Yuan. "Interpreting and Improving Diffusion Models from an Optimization Perspective." arXiv preprint arXiv:2306.04848 (2023).
[2] Poole, Ben, et al. "Dreamfusion: Text-to-3d using 2d diffusion." arXiv preprint arXiv:2209.14988 (2022).
[3] Wang, Zhengyi, et al. "Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation." Advances in Neural Information Processing Systems 36 (2024).
[4] Peihao Wang et al. “Taming Mode Collapse in Score Distillation for Text-to-3D Generation”. In: arXiv:2401.00909 (2023).
---
Rebuttal Comment 1.1:
Comment: Dear reviewer 4KTy,
Given the limited time for the author-reviewer discussion period, we would appreciate if you could let us know if we fully addressed your comment and if there are any additional questions or clarifications we could add.
Thank you again for your time and effort.
Best regards,
On behalf of all authors | Summary: This paper connects score distillation sampling (SDS) to a DDIM sampling process. The proposed method Score Distillation via Inversion (SDI) replaces the original random noise in SDS with DDIM inversion, and show significantly improved quality compared to SDS and other state-of-the-art prior methods.
Strengths: 1. The connection from SDS to DDIM sampling is interesting. The motivation is also clear that it considers the change of the "sample prediction" variable to guide the 3D generation process. The method is supported by well derived theory (for 2D) and the DDIM inversion is a clever solution.
2. The results generally look good. It shows significant improvement over its baseline method SDS, and is competitive to recent state-of-the-art text-to-3D methods. The experiments are thorough.
3. The proposed method and results could provide insights for future research in the important direction of text-to-3D distillation.
Weaknesses: 1. Overall, the visual results seem to have a slight "gray" color/style shift. Is this due to the approximation or some bias in the theory? Is there any hypothesis about the theoretical/practical reasons?
2. The DDIM inversion and approximation could introduce additional computational cost and training noise.
3. In "ISM as a special case", it seems the main difference between the proposed method and ISM is the additional text condition. Are there any other key differences in method implementation or theory? More details about the contribution of this work relative to ISM would be helpful.
Technical Quality: 4
Clarity: 4
Questions for Authors: Does the proposed method remedy Janus problem, compared to the baseline SDS method? (If yes, why)
Also please check the weakness section.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank r6QS for the helpful review. We will address their comments in the final version and clarify some points below:
# Gray color shift
We agree that results in the main paper might seem gray or sometimes have a red/green tone. We explain this by the dark background color that is generated in the NeRF. Our algorithm is implemented on top of threestudio’s SDS, where apart from the object-level NeRF, the shapes are also allowed to have a background. Typically the generation converges to darker tones of the background, which biases the diffusion process to reflect illumination in the rendered 3D shapes. For visualization in the original submission, we simply cropped the object from the final renderings. When on a white background, this removes the illumination context and makes the shapes look grayer.
To further illustrate this we provide renderings with their original background in Fig.7 of the rebuttal PDF. Additionally, Fig. 1 (bottom row) of the rebuttal PDF shows that when our algorithm is reimplemented in the ISM’s code base, where the background is fixed to be white, our final renderings have brighter, white colors reflecting the global illumination.
In summary, **the diffusion model bakes in the background illumination into the texture of the shape.**
One possible solution that was suggested in prior work consists of randomly changing the background colors to avoid overfitting to a single one. In this case, however, we observed a decrease in the quality of the renderings.
# Computational Cost
We agree that, unfortunately, DDIM inversion requires additional forward passes with the denoiser, which increases computational cost. However, we would like to clarify a few points:
1. Despite the additional cost required for obtaining the noise sample, our algorithm is typically more stable and requires fewer steps to converge than other state-of-the-art methods, while achieving similar quality in the results. The time per generation in Table 1 shows that **our algorithm is still 2-3x faster than the state of the art** since it takes fewer steps.
2. Our ablation in Fig.11 demonstrates that 10 steps are enough for DDIM inversion to obtain good-quality 3D generations. In practice, one NeRF update is so slow that after the 10 additional inferences with the denoiser per optimization step we see only a 2x slow-down. This effect will be more noticeable in future work involving Gaussian Splatting, where 3D shape update is not a bottleneck anymore.
# Differences with ISM
For a detailed comparison of our algorithm and ISM, please refer to the general response above and to the rebuttal PDF attached to the response.
# Janus problem
Thank you for the question. Without additional regularization, our algorithm has a stronger tendency towards the Janus problem compared to SDS. SDS uses prompt augmentation to avoid this problem, adding phrases like “front view” and “back view.” As described in Appendix A2, our algorithm’s stronger bias toward the Janus problem can be explained by the tendency of diffusion models to ignore parts of the prompt when CFG is not high enough (see Fig.3 in the main paper). The proposed improvement of finding a better noise term allows us to reduce the CFG value from 100 in SDS to the typical 7.5, which leads to a weaker view-dependent prompt augmentation.
To further prevent the Janus problem, we add entropy as in [1] and orthogonalize the augmented prompts as in [2]. In rare cases, our algorithm might still produce multiple faces in one generation. This limitation is reported and illustrated in Appendix E1.
[1] Peihao Wang et al. “Taming Mode Collapse in Score Distillation for Text-to-3D Generation”. In: arXiv:2401.00909 (2023).
[2] Mohammadreza Armandpour et al. “Re-imagine the negative prompt algorithm: Transform 2d diffusion into 3d, alleviate Janus problem and beyond”. In: arXiv:2304.04968 (2023).
---
Rebuttal Comment 1.1:
Comment: Dear reviewer r6QS,
Thank you again for the thorough review and high evaluation of our work. Could you please let us know if our response fully addressed your comments and if we can add any additional clarifications? Please note the additional evaluations we added in the PDF attached to the general response.
Best regards,
On behalf of all authors | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for the detailed and thoughtful feedback.
We are pleased that they appreciate the theoretical contributions and the novelty of our approach (“*well organized and provides valuable intuitions*” (4KTy), “*insights for future research in the important direction*” (r6QS)). The reviewers found our method to be practical (“*significantly improves the quality of 3D shape generation*” (ZvgL)) and well supported by experiments (“*highly valuable and interesting ablation study*” (4KTy), “*experiments are thorough*” (r6QS), “*thorough analysis and convincing ablation*” (ZvgL)).
In response to requests for further clarifications and comparisons, we were able to execute most of the suggested experiments and plan to add them to the final version of the paper. **Please refer to the rebuttal PDF.**
Below, we address common questions or questions providing valuable intuition, while the rest of the comments are addressed in individual replies.
# Comparison with ISM
We agree with reviewers r6QS and 4KTy that Interval Score Matching (ISM) is relevant since it uses DDIM inversion to improve the consistency of 3D guidance. Below we highlight the key differences between our work and ISM.
## Theoretical assumptions
ISM empirically observes that ``pseudo-GT’’ images used to guide Score Distillation Sampling (SDS) are sensitive to their input and that the single-step generation of pseudo-GT yields oversmoothing. From these observations, **starting with SDS guidance**, ISM adds DDIM inversion and multi-step generation to **empirically** improve the stability of the guidance.
As highlighted by the reviewer ZvgL, most advances in score distillation—including ISM—were obtained experimentally. In contrast, our work **starts with 2D diffusion sampling** to rederive score-distillation guidance and motivate improvements. That is, our work formally connects SDS to well-justified techniques in 2D sampling.
## DDIM inversion
In ISM the empirically motivated **DDIM inversion is at the basis of the derivation** of the final update rule. We suggest a general form of the noise term (eq.8), for which **DDIM inversion is just one possible solution**. Our theoretical insights are agnostic to particular algorithms of root-finding, which makes it possible to use more efficient solutions in future research (e.g. train diffusion models as invertible maps to sample noise faster).
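To make this point concrete for readers, below is a minimal NumPy sketch of DDIM inversion viewed as one possible procedure for obtaining the noise term. The $\bar\alpha$ schedule, tensor shapes, and placeholder `eps_model` are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def ddim_invert(x0, eps_model, alphas_bar):
    """Run the deterministic DDIM update in reverse to recover a noise sample.

    eps_model(x, t) stands in for the pretrained denoiser; alphas_bar[0]
    is assumed to be 1.0 (a clean image at t = 0).
    """
    x = x0
    for t in range(1, len(alphas_bar)):
        eps = eps_model(x, t - 1)
        # predicted clean image at the previous (less noisy) step
        x0_pred = (x - np.sqrt(1 - alphas_bar[t - 1]) * eps) / np.sqrt(alphas_bar[t - 1])
        # step "up" the noise schedule, reusing the same eps estimate
        x = np.sqrt(alphas_bar[t]) * x0_pred + np.sqrt(1 - alphas_bar[t]) * eps
    return x

# Sanity check: with a zero denoiser, inversion reduces to pure rescaling,
# so the final iterate is sqrt(alphas_bar[-1]) * x0.
alphas_bar = np.linspace(1.0, 0.1, 11)
x0 = np.ones((4, 4))
xT = ddim_invert(x0, lambda x, t: np.zeros_like(x), alphas_bar)
```

Any other root-finding scheme for eq.8 could be substituted for the loop above, which is the sense in which DDIM inversion is just one possible solution.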
## Guidance term
The update rules provided in our eq.10 and ISM’s eq.17 have two main differences:
1. **Full vs. interval noise.** Assuming DDIM inversion finds a perfect root of our stationary equation, ISM’s update rule can take a form similar to our eq.10 (the interval noise equals $\kappa$ if eq.8 is satisfied). However, as shown in Fig.9, DDIM inversion does not find a perfect root, and thus the two forms are not equivalent. Our theory shows that the **full noise term is more accurate**. We also show the effect of choosing one term vs. another in Fig.2 of the rebuttal PDF.
2. **Conditional vs. unconditional inversion.** Our derivation hints that the roots of eq.8 are prompt-dependent, motivating our use of conditional DDIM inversion (not used in ISM). Our Fig.10 shows how unconditional inversion yields oversaturation. To demonstrate this effect more clearly, Fig.5 of the rebuttal PDF provides a simple 2D experiment.
## Practical Results
To control for different design choices (GaussianSplattings in ISM vs. NeRF in ours, etc.) **we reimplemented our algorithm in the code base of ISM**, with only the minimal changes discussed in the previous section.
Fig.1 of the rebuttal PDF provides a qualitative comparison using the prompts and settings in ISM’s code. Fig.2 shows the effect of each change made to ISM guidance. Below is the quantitative comparison (ISM code base for both).
|Method|CLIP Score ↑|quality ↑|sharpness ↑|real ↑|ImageReward ↑|Time|VRAM|
|-|-|-|-|-|-|-|-|
|ISM, 5k steps|**28.60±2.03**|0.85±0.02|0.98±0.01|0.98±0.01|-0.52±0.48|45min|15.4GB|
|SDI (ours), 5k steps|28.47±1.29|**0.88±0.03**|**0.99±0.00**|0.98±0.01|**-0.30±0.32**|43min|15.4GB|
## Summary
We bridge the gap between experimentally based score distillation techniques and theoretically justified 2D sampling. Both SDS and ISM can be seen as different approaches to finding roots of eq.8. This theoretical insight allows us to modify both ISM and SDS, reducing over-saturation for the former and improving the general quality of the latter.
# Smaller error in eq.8 = better 3D generation?
In response to reviewer ZvgL, we clarify the intuition behind the relation between 3D generation quality and error in root-finding of the stationary point equation.
A more accurate solution does not necessarily yield a better 3D generation. It is better to say that **a more precise solution of eq.8 makes the guidance of score distillation closer to that of DDIM**. However, other factors contribute to the quality of 3D shape: initialization, view consistency, denoiser’s generative ability, etc. We address this question in more detail in the individual reply to reviewer ZvgL.
# Why negative CFG?
In response to reviewer 4KTy, we add intuition behind using negative CFG. While the general noise term is formally derived, the choice of negative CFG is mostly intuition-based and supported by experiments.
Contrary to a possible impression, negative CFG is not equal to a prompt of an opposite meaning. In fact, **negative CFG values attract the image to the mean of the dataset in the generation process and repulse it from the mean on DDIM inversion.** To support this claim we provide a theoretical and empirical argument in the individual response to reviewer 4KTy.
We notice that the NeRF initialization in SDS is very different from the i.i.d. Gaussian initialization in DDIM. Our intuition is that *negative CFG acts as regularization of the noise due to its mean-repulsion properties*. It brings the bad initialization of the NeRF closer to the Gaussian noise expected by DDIM.
Pdf: /pdf/fea2dcfdc8eff55ce8854dc7a0d7aaa4c3d3633a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Efficiency of the First-Price Auction in the Autobidding World | Accept (poster) | Summary: This work studies the efficiency of first-price auctions when bidders in the market are all value maximizers or a mix of value and utility maximizers. With all value maximizers, the PoA is 1/2, while with a mix of the two types, it is approximately 0.457. The paper also considers the case when the seller has machine-learned reserves on bidders' values, which are approximations of their true values with lower and upper bound guarantees. In this case, the PoA is also given under different approximation guarantees.
Strengths: This paper studies an important problem on the efficiency of first-price auctions in the auto-bidding world, following the tight PoA results for traditional utility maximizers given by Jin and Lu, 2022. In this work, tight PoAs are given for the full auto-bidding world with value maximizers, and similar results are shown for the mixed world. In this sense, the contributions of this work are solid. Involving machine-learned reserves is also a good idea, and corresponding results are presented. I think this paper is above the bar for NeurIPS.
Weaknesses: That being said, I still have some minor questions about this work. For example, how is the PoA related to the numbers of value and utility maximizers in the mixed auto-bidding world? The authors seem to give only the infimum of all PoAs in the mixed world. I think more details hidden behind the PoA could be dug out.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses above. Also, what does $\mathsf{rw}(j)$ mean at the end of Page 4?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful and encouraging comments.
**PoA parametrized by the number of bidders**: this is a great question, and indeed we have given it some thought. In short: any nontrivial refinement (if there is one) beyond the 0.457 bound would require a more sophisticated parametrization, which would necessarily take into consideration parameters other than the numbers of bidders of the two types. The reason is that in the proof of Lemma 6 (the lower bound on the PoA with mixed bidders), we construct a tight instance with only 1 value maximizer and 1 utility maximizer. This means the lower bound of 0.457 holds as long as there exists one bidder of each type, because one can always add dummy bidders of both types who value all items at 0. Therefore, a bound that depends only on the numbers of bidders of the two types would look like: (1) if there is no value maximizer, the bound reduces to prior results for settings with utility maximizers only, (2) if there is no utility maximizer, the bound is 1/2, and (3) otherwise the bound is 0.457.
**rw(j)**: sorry for the confusion. rw(j) here denotes the index of the "rightful winner" in auction j, who has the highest value in auction j and therefore should win in a welfare-maximizing allocation. The notation is introduced in the proof of Theorem 1, which has been moved to Appendix B because of space constraints. We will fix this and define rw(j) in the main paper.
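As a purely illustrative sketch of this definition, the rightful winners and the corresponding welfare-maximizing benchmark can be computed from a value matrix (the numbers below are hypothetical):

```python
# rw(j): the "rightful winner" of auction j is the bidder with the highest
# value for it, so the optimal welfare sums the column maxima.
v = [[5.0, 1.0, 2.0],   # v[i][j]: value of bidder i for auction/item j
     [3.0, 4.0, 2.5]]

rw = [max(range(len(v)), key=lambda i: v[i][j]) for j in range(len(v[0]))]
opt_welfare = sum(v[rw[j]][j] for j in range(len(v[0])))

assert rw == [0, 1, 1]
assert opt_welfare == 5.0 + 4.0 + 2.5   # 11.5
```

The PoA then compares equilibrium welfare against this `opt_welfare` benchmark.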
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. Appreciate that! | Summary: This paper studies the price of anarchy of simultaneous first-price auction, where there are $n$ bidders and $m$ auctions/items for sale. The underlying assumption of this paper is that all bidders have a **fixed** valuation. This paper considers 2 types of bidders: 1) a utility-maximizing bidder which maximizes $x(v-p)$, and 2) a value maximize which maximizes the total value of the goods. For value-maximizer only, they show that there is a tight PoA of approximately 1/2. For mixed bidder behavior, they show a tight PoA of approximately 0.457. Finally, they studied how to set a reserve price as a function of the underlying **value**, and how the change of this reserve price affects the PoA of the resulting first price auction with reserve.
Strengths: - This paper studies an important and timely topic of auto bidding, i.e., simultaneous first price.
- This paper has some figure illustrations of the results.
Weaknesses: - The introduction of the paper misses a lot of related work on auto-bidding, e.g., [1], [2], [3].
- For the model considered in this paper, it is more reasonable to assume that the bidder is unit-demand or has a submodular utility instead of additive utility.
- **The settings in Table 1 are not comparable**:
-- For the 1/2 of full auto-bidding in a second-price auction (i.e., [Aggarwal et al., 2019]), they consider the setting where each bidder has a **budget**, and the welfare is defined as liquid welfare instead of the sum of valuations.
-- For no auto-bidding in the first-price auction (i.e., [Balseiro et al., 2021b]), they consider the **single item** multi-bidder setting.
-- This paper studies the **multi-bidder multi-item** setting, and, since everyone pays what they bid, it is natural and reasonable to assume no overbidding, compared to other settings, e.g., second price.
- [line 110] "without loss of generality we assume this threshold is 1". In the related discussion of this sentence in Appendix A, this paper has an **unconventional liquid welfare** definition. In addition, the cited papers all use the conventional or **other** versions of liquid welfare:
-- [Aggarwal et al., 2019] use the liquid welfare of the budgeted version as, $\sum_{i \in [n]} \text{min}\{ B_i, \sum_{j} v_{i,j} x_{i ,j } \}$.
-- [Deng et al.] use the above definition as well.
-- [Balseiro et al., 2021a] doesn't use liquid welfare but first best revenue.
-- [Liaw et al., 2022] uses $\sum_{i \in [n]} \tau_i \sum_{j \in [m]} x_{i,j} v_{i,j}$, where $\tau_i$ is the ROI for bidder $i$.
-- As a result, the WLOG actually only holds for the liquid welfare specifically defined in this setting.
- For the PoA definition, the paper never defines the equilibrium it considers: Bayesian Nash, competitive, or coarse correlated.
- Section 5 proposed a reserve price that depends on the **value** but not the bid, which is not observable and hence cannot be applied in practice.
- There is no conclusion in this paper.
Minor Comment:
- line 317, $\{ v_{i,j}\}$ should be $\{ v_{i,j}\}_{i \in [n], j \in [m] }$.
[1]: Gaitonde, J., Li, Y., Light, B., Lucier, B., & Slivkins, A. (2022). Budget pacing in repeated auctions: Regret and efficiency without convergence. ITCS 2023.
[2]: CONITZER, Vincent, et al. Pacing equilibrium in first price auction markets. Management Science, 2022, 68.12: 8515-8535.
[3]: AGGARWAL, Gagan; BADANIDIYURU, Ashwinkumar; MEHTA, Aranyak. Autobidding with constraints. In: Web and Internet Economics: 15th International Conference, WINE 2019, New York, NY, USA, December 10–12, 2019, Proceedings 15. Springer International Publishing, 2019. p. 17-30.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see the weakness section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The paper doesn't explicitly state the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed comments.
**Related work**: we will discuss these results but we do want to note that the third missing citation of Aggarwal et al. mentioned by the reviewer (which is one of the earliest papers on autobidding) is the first entry in our list of references.
**Submodular / unit-demand bidders in autobidding**: while we appreciate the suggestion, we'd like to note that additive bidders are the predominant assumption in the autobidding literature adopted in almost every paper, including the three papers mentioned by the reviewer. It would certainly be interesting to see whether our results extend to richer valuations, such as submodular ones; but we would like to highlight that it is already technically non-trivial to develop PoA bounds for additive bidders in first-price auctions.
**Comparison in table 1**: There seem to be a few inaccuracies in the reviewer's comment. We provide an itemized response below.
- [Aggarwal et al., 2019]: the bound still applies when there is no budget constraint (or equivalently, when each bidder has an infinite budget); and this result is almost folklore now in the autobidding literature.
- [Balseiro et al., 2021b]: the paper in fact studies the multi-bidder multi-item setting.
- No overbidding: the assumption would make the problem trivial. In fact, the only reason a bidder would ever want to underbid is to save budget so they could overbid elsewhere and compete for items that don't "rightfully belong" to them. If we were to assume no overbidding, then a dominant strategy for each bidder would be to bid their true value everywhere, and the resulting PoA would be 1 (i.e., full efficiency). On the other hand, rational overbidding is a reasonable strategy for value maximizers because they don't directly care about the utility. In other words, they are happy to pay more than an item's value as long as that item is worth *something* to them and the overall ROI is large enough. This is precisely what happens in the lower bound constructions in the proofs of Theorem 1 and Lemma 5. We have discussed in detail why no-overbidding makes less sense in the last paragraph in Section 2.
**"Unconventional" liquid welfare definition**: we are not sure we follow this question, since the definitions of liquid welfare mentioned by the reviewer are equivalent to ours. In particular, the version with budget constraints reduces to ours by setting $B_i = \infty$, and the Liaw et al. definition (note that their $T_i$ is our $1 / \tau_i$, which makes no difference because these are input parameters) is precisely ours due to the reasons specified in Appendix A. In general, all the definitions of liquid welfare from the literature as well as our paper share the same characteristic that they capture the maximum achievable revenue while satisfying all advertisers’ constraints.
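To illustrate the reduction claimed above (the budgeted liquid welfare with $B_i = \infty$ collapsing to the plain sum of realized values), here is a toy numerical sketch; the helper name and the numbers are purely illustrative, not from any of the cited papers:

```python
import math

def liquid_welfare_budgeted(values, alloc, budgets):
    # [Aggarwal et al., 2019]-style definition:
    #   sum_i min(B_i, sum_j v_{i,j} * x_{i,j})
    return sum(
        min(B, sum(v * x for v, x in zip(vs, xs)))
        for vs, xs, B in zip(values, alloc, budgets)
    )

values = [[3.0, 1.0], [2.0, 4.0]]  # v_{i,j}: 2 bidders, 2 items
alloc = [[1.0, 0.0], [0.0, 1.0]]   # x_{i,j}: bidder 1 wins item 1, bidder 2 wins item 2

# With infinite budgets the min() never binds, so the budgeted liquid
# welfare reduces to the plain sum of realized values: 3 + 4 = 7.
unbounded = liquid_welfare_budgeted(values, alloc, [math.inf, math.inf])

# With a finite budget the cap binds: min(2, 3) + min(inf, 4) = 6.
capped = liquid_welfare_budgeted(values, alloc, [2.0, math.inf])
```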
**PoA definition**: the auction game we consider is perfect-information, and the solution concept we consider is the standard Nash, which we believe are reflected in our definitions in Section 2. We do not consider Bayesian (correlated) equilibria or competitive equilibria in this paper.
**Price in Section 5 depending on value**: this is the common assumption in the literature of algorithms / mechanisms with predictions. The idea is to design algorithms / mechanisms which have a reasonable worst-case guarantee, and whose performance improves as the accuracy of the machine-learning-powered predictions ($\{s_j\}$ in our model) increases. A similar model is also studied in [Balseiro et al., 2021b]. When such predictions are not available, one can always fall back to the standard model that we study in Sections 3 and 4.
---
Rebuttal 2:
Title: Maintain my current score
Comment: I’m a bit surprised by the number of positive reviews this paper has received, especially considering the following factors:
- **This paper did not use the full 9 pages.** The paper could have used the remaining 1/2 page space to add the **missing conclusion**.
- **First-price pacing equilibrium has more positive structural properties than the equilibrium studied in this paper.** To further elaborate, budget pacing already guarantees that a pacing equilibrium exists, and such an equilibrium can be computed efficiently, even by learning algorithms. Especially considering that this paper studies autobidders, we need to examine whether this equilibrium can be achieved efficiently, i.e., whether an equilibrium exists that can be found efficiently, and whether there is a learning dynamic such that, when all bidders use it, an equilibrium is reached. **However, this paper lacks these important analyses of the equilibrium.**
- **The gaps in the literature it cites**. The paper cites papers w.r.t. pacing equilibria only from Google but **misses a lot of important work from both academia and other companies**, e.g., [1], [2], [3], [5], [6]. **The missed citations and the paper's writing give the impression that the paper's contribution is more significant than it is**. This should be considered when assessing the overall impact of the work.
Here is my response to the author's rebuttal below.
- **Submodular / unit-demand bidders in autobidding**. [3] already studies first-price pacing for general utilities, so it is possible to consider utility models beyond additive.
- **"Unconventional" liquid welfare definition.** Liquid welfare is initially (and by convention) defined when the bidder has a **budget constraint**, so the definition of liquid welfare takes a min over the total value a bidder could get and his own budget; please see [4] as the reference. This paper does not include a budget, which is why the reviewer still believes the liquid welfare discussed is different from the commonly used version.
- **PoA definition**. There is no formal definition of the equilibrium you've considered in this paper. This is a presentation issue.
- **Price in Section 5 depending on value**: "this is the common assumption in the literature of algorithms / mechanisms with predictions." **Please support this argument with references**. From the reviewer's perspective, however, this reserve method cannot be applied iteratively to obtain the most efficient equilibrium due to strategic issues; see [7]. In this light, **the reserve method would have an insignificant improvement over no-reserve in practice**.
[1] Conitzer, V., Kroer, C., Panigrahi, D., Schrijvers, O., Stier-Moses, N. E., Sodomka, E., & Wilkens, C. A. (2022). Pacing equilibrium in first price auction markets. Management Science, 68(12), 8515-8535.
[2] Gao, Y., & Kroer, C. (2023). Infinite-dimensional fisher markets and tractable fair division. Operations Research, 71(2), 688-707.
[3] Feng, Y., Lucier, B., & Slivkins, A. (2024, June). Strategic Budget Selection in a Competitive Autobidding World. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (pp. 213-224).
[4] Dobzinski, S., & Leme, R. P. (2014, July). Efficiency guarantees in auctions with budgets. In International Colloquium on Automata, Languages, and Programming (pp. 392-404). Berlin, Heidelberg: Springer Berlin Heidelberg.
[5] Lucier, B., Pattathil, S., Slivkins, A., & Zhang, M. (2024, June). Autobidders with budget and roi constraints: Efficiency, regret, and pacing dynamics. In The Thirty Seventh Annual Conference on Learning Theory (pp. 3642-3643). PMLR.
[6] Golrezaei, N., Jaillet, P., Liang, J. C. N., & Mirrokni, V. (2023, April). Pricing against a budget and roi constrained buyer. In International Conference on Artificial Intelligence and Statistics (pp. 9282-9307). PMLR.
[7] Amin, K., Rostamizadeh, A., & Syed, U. (2013). Learning prices for repeated auctions with strategic buyers. Advances in neural information processing systems, 26. | Summary: The authors study the price of anarchy of first-price auctions where the bidders can be either only autobidders (value maximizers), or a mix of autobidders and traditional bidders (utility maximizers). The setting consists of $n$ bidders and $m$ auctions. Each bidder bids a value $b_{i,j}$ for each auction $j$, and the one with the maximum bid wins the auction and has to pay $b_{i,j}$. Utility maximizers try to maximize the value minus the payment, while value maximizers try to maximize the value, subject to the constraint that the ratio between value and payment is at least a certain threshold. The price of anarchy is the ratio between the worst social welfare (sum of the values) in an equilibrium and the optimal social welfare.
The authors prove that in a full autobidding world, (even) when bidders can bid randomly, the price of anarchy is $1/2$. This improves the result of Liaw et al., who prove that the price of anarchy in this setting is $1/2$ when bidders bid deterministically. More importantly, the authors prove that the price of anarchy in the mixed setting, with both autobidders and traditional bidders present, is $0.457$. They prove a lower bound and give an example achieving this price of anarchy, thus matching the lower and upper bounds for this setting. Moreover, they prove that if the seller can predict the value of the bidders (using machine-learned advice), it can set reserves to improve the efficiency (price of anarchy). The more precise the advice is, the better the price of anarchy gets, in the range of $0.457$ to $1$.
Strengths: - The paper is very well-written and self-contained.
- The addressed setting seems to be a very natural one to consider as also motivated in the paper.
- The results are strong and complete the picture of price of anarchy in the first-price auctions.
- While the proof is not easy and it is novel as the authors claim, it is partitioned into small parts to be more presentable.
Weaknesses: I did not find any major weaknesses.
The paragraph "Utility maximizers and value maximizers" on page 3 was a bit confusing. In line 109, "at most" should be "at least" I think, and in line 111 "at least" should be "at most".
Technical Quality: 3
Clarity: 4
Questions for Authors: I do not have any questions.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Yes. The authors discuss for which settings the results apply and for which ones they do not apply. It also gives a negative result.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful and encouraging comments.
**Paragraph on page 3**: thanks a lot for pointing this out. You are absolutely right and we will fix the paragraph.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I'm happy to keep my score. | Summary: Autobidding is the technique of using optimization algorithms to assign ad slots to bidders while respecting their constraints (e.g., budget, ROI, ROAS, etc.). It generates about 80% of the total online ad revenue for major tech companies, and is therefore quite a significant topic to study. Within this topic, there is the question of "price of anarchy", a notion introduced by Koutsoupias and Papadimitriou, which measures the ratio of the worst welfare in equilibrium to the socially optimal welfare; the smaller this is, the better. Finally, given the recent emergence of the trend of first-price auctions by companies like Google, the question this paper studies is: What is the price of anarchy of running the first-price auction in autobidding?
The most surprising component of their result is that the price of anarchy is the same for first-price and second-price auctions in the fully autobidding world. The key technical hurdle in comparison with second-price auction is that the first-price auction is not truthful for utility maximizers, so uniform bidding isn't the best strategy for value maximizers.
Strengths: -
Weaknesses: I found the details of the paper somewhat difficult to follow, though the authors do seem to have put quite a bit of effort in making the proofs intuitive. (My background isn't in algorithmic game theory so perhaps I'm not really the intended audience for this.)
Technical Quality: 3
Clarity: 3
Questions for Authors: Questions: I'm curious about the comparison with the paper, "Efficiency of Non-Truthful Auctions in Auto-bidding with Budget Constraints" by Liaw,
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A, primarily a theory paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful and encouraging comments.
**The Liaw et al. paper**: thank you for pointing us to the paper. This paper is indeed quite relevant and we will discuss it in our paper. In short, they consider a similar setting with autobidders (value maximizers) only, which is close to our "full autobidding" setting. The main difference is that they focus on the effects of budget constraints on top of ROI / ROS constraints. The high-level message there is that using the fractional optimum as the benchmark, first-price auctions become much less efficient with budget constraints, but this efficiency loss can be circumvented if we consider the (weaker) integral optimum as the benchmark.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Dear authors,
Thank you for your response. I'm happy to keep my score and advocate for acceptance of your paper. (I'll increase my confidence score to 3).
Best wishes! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Expectile Regularization for Fast and Accurate Training of Neural Optimal Transport | Accept (spotlight) | Summary: - This paper introduces a new regularizer for learning Neural Optimal Transport. The proposed method, called Expectile-Regularized Neural Optimal Transport (ENOT), is based on expectile regression. ENOT demonstrates competitive results on the Wasserstein-2 benchmark and Unpaired Image-to-Image Translation.
Strengths: - The proposed method is easy to implement as it only requires an additional regularizer.
- The ENOT achieves competitive results on the Wasserstein-2 benchmark.
- The ENOT proposes a new regularizer that softly guides the potential $g$ towards $g^{cc}$ using expectile regression.
Weaknesses: - The derivation of the ENOT regularizer (Eq 17) requires further clarification.
- Please see the Questions Section below.
Technical Quality: 2
Clarity: 2
Questions for Authors: - The ENOT and the Monge gap are similar in that they both serve as regularizers for inducing Neural Optimal Transport. Could you provide an additional quantitative comparison between ENOT and Monge gap? Currently, there are only qualitative examples in Fig 1 and 2.
- Table 3 presents the FID results on the Unpaired Image-to-Image Translation task. On the other hand, Table 10 in the appendix includes additional MSE results. For the Unpaired Image-to-Image Translation task, the goal of Neural Optimal Transport is to accurately approximate the target distribution (FID) while minimizing the cost function (MSE). Hence, the MSE results should be included in the main part of the manuscript.
**Typo:**
- Line 220: Table 10 -> 3
- Line 230: Figure 8 -> 4
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: - The authors addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The ENOT and the Monge gap are similar in that they both serve as regularizers for inducing Neural Optimal Transport. Could you provide an additional quantitative comparison between ENOT and Monge gap? Currently, there are only qualitative examples in Fig 1 and 2.**
We are thankful for this suggestion. We performed an additional comparison with the Monge Gap on the $W_2$ benchmark (dataset from Table 2), which can be found below. Given the potential impact of these results on the ultimate algorithm selection for NOT, we propose adding the table to the main text.
| Monge Gap | ENOT | DIM (W2 bench) |
| :------------: | :------------: | :--------------: |
| 0.1 +- 0.0 | 0.02 +- 0.0 | 2 |
| 0.57 +- 0.0 | 0.03 +- 0.0 | 4 |
| 2.05 +- 0.06 | 0.14 +- 0.01 | 8 |
| 4.22 +- 0.1 | 0.24 +- 0.03 | 16 |
| 7.24 +- 0.17 | 0.67 +- 0.02 | 32 |
| 7.99 +- 0.19 | 0.56 +- 0.03 | 64 |
| 9.1 +- 0.29 | 0.3 +- 0.01 | 128 |
| 9.41 +- 0.21 | 0.51 +- 0.02 | 256 |
**Q2: Table 3 presents the FID results on the Unpaired Image-to-Image Translation task. On the other hand, Table 10 in the appendix includes additional MSE results. For the Unpaired Image-to-Image Translation task, the goal of Neural Optimal Transport is to accurately approximate the target distribution (FID) while minimizing the cost function (MSE). Hence, the MSE results should be included in the main part of the manuscript.**
We put the results with the MSE metric in the Appendix because the majority of baselines we compare against did not report results for those tasks. For convenience, we performed the experiments with the baseline methods and can include them in the main part of the paper:
| Tasks (MSE) | Extr. OT | Ker. OT | ENOT |
| :---------------------: | :--------: | :-------: | :----: |
| Handbags ⇒ Shoes 128 | 0.37 | 0.37 | 0.34 |
| FFHQ ⇒ Comics 128 | 0.22 | 0.21\* | 0.2 |
| CelebA(f) ⇒ Anime 64 | 0.3 | 0.34\* | 0.26 |
| CelebA(f) ⇒ Anime 128 | 0.31\* | 0.36 | 0.28 |
Other issues:
Equation (17): please kindly refer to our response to Reviewer E19z above.
We are also thankful for spotting the typos, which we have corrected.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author for their clarifications and additional experiments. These have been helpful in addressing my concerns. Hence, I will raise my rating to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score and for helping us improve the paper. | Summary: The paper introduces ENOT (Expectile-Regularized Neural Optimal Transport), a new method for training Neural Optimal Transport (NOT) models. It improves the efficiency of NOT solvers by incorporating a novel expectile regularization on dual Kantorovich potentials. Empirically, the authors use a Wasserstein-2 benchmark to demonstrate their method's improvement in accuracy and runtime.
Strengths: - The paper is well-organized and clearly written.
- The proposed expectile regularization provides a new way to stabilize the learning of dual potentials. Empirically, this works really well, compared to existing NOT formulations.
- The experimental results are comprehensive.
- The authors were very upfront about the limitations of their approach, e.g., the requirement of extra hyperparameters $\tau$ and $\lambda$.
Weaknesses: To my understanding, while the empirical performance of ENOT is strong, the technical novelty of the approach is limited. Expectile regularization combines the c-conjugate transform (7) and the quantile regression loss (15). ENOT merges two established ideas to produce strong empirical results.
Technical Quality: 3
Clarity: 3
Questions for Authors: The quantile regularization term biases the estimation of the conjugate operator, making it non-exact. Despite this, the authors note that the regularization enables the method to outperform all exact methods. Could the authors speculate on the reasons for this? Is it solely due to the numerical instability of the exact methods?
In Section 5.4, the authors mentioned
>Figure 8 presents the study of the impact of the proposed expectile regularization on the L_2^{\text{UV}} metric.
I believe this is a typo. The figure that the authors refer to is Figure 6 in the Appendix.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See "Weakness".
Nitpicking: the authors may want to consider differentiating the commands \citep and \citet in latex citation and use them accordingly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are thankful for raising important questions and for the positive feedback.
First, we would like to address the concerns about novelty:
To the best of our knowledge, ENOT is the first approach that ventures into approximating the c-transform by means of expectile regularization. Moreover, ENOT is the first to apply this approximation to the efficient and accurate solution of Neural OT, completely avoiding the additional $c$-conjugate optimization done in prior works. These features provide at least 3-fold better and 10-fold faster training performance compared to the competitors.
**Q1: The quantile regularization term biases the estimation of the conjugate operator, making it non-exact. Despite this, the authors note that the regularization enables the method to outperform all exact methods. Could the authors speculate on the reasons for this? Is it solely due to the numerical instability of the exact methods?**
We acknowledge the need to discuss this and would like to add the following text to the paper. In actuality, the expectile regularization does not introduce additional bias, either in theory or in practice. The formal convergence to the c-conjugate transform, as $\tau$ converges to 1, is discussed in Appendix D. At the end of training (or upon convergence), we obtain the exact estimate of the $c$-conjugate transformation. Other methods demand near-exact estimation at each optimization step, requiring additional inner optimization and introducing significant overhead. We believe this introduces an imbalance in the simultaneous optimization of $T$ and $g$ in Equation 6, underestimating the OT distance as a result.
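For readers unfamiliar with the loss under discussion, here is a minimal standalone sketch of the asymmetric squared (expectile) loss $\ell_\tau(u) = |\tau - \mathbb{1}[u < 0]|\, u^2$; this is an illustration only, not the paper's implementation:

```python
def expectile_loss(u, tau):
    # Asymmetric squared loss |tau - 1[u < 0]| * u**2 on the residual
    # u = target - prediction. tau = 0.5 gives the symmetric squared
    # loss (up to a factor 1/2); as tau -> 1, positive residuals are
    # penalized far more, pulling the fit toward the targets' maximum.
    weight = (1.0 - tau) if u < 0.0 else tau
    return weight * u * u

symmetric = expectile_loss(2.0, 0.5) == expectile_loss(-2.0, 0.5)  # True
ratio = expectile_loss(1.0, 0.99) / expectile_loss(-1.0, 0.99)     # ~99
```

The asymmetry ratio grows as $\tau/(1-\tau)$, which is why values of $\tau$ near 1 approximate a maximum (and hence the c-conjugate) ever more closely.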
**Q2: Section 5.4 typo**
Thank you for careful reading. We have made the correction.
---
Rebuttal 2:
Title: Reponse to rebuttal
Comment: Thank you for your response. I especially appreciate your discussion on Q1. It would be great to see a similar discussion included in the paper.
I've increased my score to 7. However, I must note that although I've used optimal transport in previous work, I don't consider myself an expert in the field. My support for the paper is mainly because of its strong empirical performance. For this reason, I recommend that the area chair give more weight to the opinions of OT experts when making the final decision.
---
Rebuttal Comment 2.1:
Comment: Thank you for your feedback and for the final score. We are pleased to see you appreciate the empirical performance and are available for further conceptual explanations if needed during the next stage of the discussion. | Summary: Authors provided a new, theoretically justified loss in the form of expectile regularisation which stabilize the learning of Neural Optimal Transport. Importantly proposed method outperforms previous state-of-the-art approaches on the established Wasserstein-2 benchmark tasks and image-to-image by a large margin.
Strengths: Originality:
- Expectile regularisation appears to be novel and not previously considered in the OT literature.
Clarity:
- Paper is well written and easy to follow.
Significance:
- The paper provides an efficient non-max-min solver for computing neural optimal transport maps, which is important.
Weaknesses: Quality:
- The comparison with other methods focuses mainly on the Wasserstein-2 benchmark and image-to-image translation. A comparison on biology problems popular in OT (see 1, 2) would improve the contribution.
References:
1) TrajectoryNet: A Dynamic Optimal Transport Network for Modeling Cellular Dynamics Alexander Tong, Jessie Huang, Guy Wolf, David van Dijk, Smita Krishnaswamy
2) Light Schrödinger Bridge: Alexander Korotin, Nikita Gushchin, Evgeny Burnaev
Technical Quality: 3
Clarity: 3
Questions for Authors: - The authors consider two training modes depending on the 'is_bidirectional' parameter. In what cases does the bidirectional training mode work better, and what does it depend on?
- The algorithm (Algorithm 1, line 2) states that in case 'is_bidirectional' is false, $T(x)=\nabla f(x)$. What does this follow from?
- Other algorithms that use expectile regression tend to use the 0.9 expectile; why does your method take the expectile parameter closer to 1?
- By what metric can we choose the hyperparameters $\lambda$ and $\tau$?
- Partial OT is often required in practical tasks. Is it possible to generalize this method to solve the problem of partial OT?
- The L2 cost is probably not the best choice in the image-to-image translation task; can we use any cost, e.g., one parameterized by a neural network, in combination with your method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Comparison with other methods is focusing mainly on Wasserstein-2 benchmark and image-to-image translation. Comparison with some popular in OT biology problems (see 1, 2) would improve the contribution.**
We thank the reviewer for the links to these interesting articles and the application of OT in biology. We analyzed the cited works and concluded that our method is not explicitly suitable for solving the problems from these papers, because they require finding highly non-linear maps. Still, we made a rough linear estimate on the proposed biological dataset (taken from the TrajectoryNet paper) with our method, given in the table below. Despite learning linear maps, ENOT outperforms baseline methods based on score matching (DSBM) and flow matching (SB-CFM).
| Solver | W1 metric |
| ------- | ------------ |
| ENOT | 0.88 +- 0.03 |
| LightSB | 0.82 +- 0.01 |
| SB-CFM | 1.22 +- 0.4 |
| DSBM | 1.78 +- 0.42 |
**Q2: The authors consider two training modes depending on the 'is_bidirectional' parameter. In what cases does the bidirectional training mode work better, and what does it depend on?**
The bidirectional mode works only with strongly convex functions $h(x-y)$. It significantly improves the solution when $T(x)$ is a discontinuous function (as in Figure 7). We propose to mention this in the Discussion Section.
**Q3: The algorithm (Algorithm 1, line 2) states that in case 'is_bidirectional' is false, $T(x)=\nabla f(x)$. What does this follow from?**
When the parameter is\_bidirectional = False, we can use any parameterization for $T(x)$ by a neural network. The expression $T= \nabla f$ is used to avoid introducing additional variables, but here, we assume **any** function.
**Q4: Other algorithms that use expectile regression tend to use the 0.9 expectile; why does your method take the expectile parameter closer to 1?**
The point is that as $\tau$ converges to 1, we get a more accurate estimate of the c-conjugate transformation. But from the experiments in Figure 4, we can conclude that values in the range 0.9-1.0 also give a good result.
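To make this behavior concrete, a small self-contained sketch (illustrative, not from the paper) of how a sample's $\tau$-expectile moves from the mean toward the maximum as $\tau \to 1$, computed with a simple iteratively reweighted mean:

```python
import random
import statistics

def expectile(ys, tau, iters=300):
    # The tau-expectile m solves
    #   tau * sum_{y > m} (y - m) = (1 - tau) * sum_{y <= m} (m - y),
    # i.e. m is a weighted mean with weight tau above m and 1 - tau
    # below; iterate this weighted mean to a fixed point.
    m = statistics.fmean(ys)
    for _ in range(iters):
        w = [tau if y > m else 1.0 - tau for y in ys]
        m = sum(wi * y for wi, y in zip(w, ys)) / sum(w)
    return m

random.seed(0)
ys = [random.gauss(0.0, 1.0) for _ in range(2000)]

m_mid = expectile(ys, 0.5)    # the 0.5-expectile is the sample mean
m_high = expectile(ys, 0.99)  # lies between the mean and max(ys)
```

With $\tau$ anywhere in 0.9-1.0 the fitted value sits close to the sample maximum, which is consistent with the observation above that this whole range works well in practice.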
**Q5: By what metric can we choose the hyperparameters $\lambda$ and $\tau$?**
Conventionally, one could use L2-UVP metric for the experiments on W2 benchmark (Tables 1,2) and FID metric for the Image-to-Image tasks (Table 3).
**Q6: Partial OT is often required in practical tasks. Is it possible to generalize this method to solve the problem of partial OT?**
Our method can be easily adapted to a special case of partial OT, where the whole source measure $\alpha$ and only a part of the target measure $\beta$ are used. Such a problem is considered in Gazdieva et al. 2023 (citation from the paper). For the general case, we are currently unable to suggest a proper method of adaptation. Still, if a new method for neural partial OT is proposed elsewhere, it will also benefit from the use of our expectile regularization.
**Q7: The L2 cost is probably not the best choice in the image-to-image translation task; can we use any cost, e.g., one parameterized by a neural network, in combination with your method?**
Indeed, the L2 distance between the activations from VGG16 layers can be used as a cost function. We also conducted experiments with it, but it turned out that the ordinary L2 cost gives more accurate results. Perhaps, this stems from the fact that the faces are centered in all images, enabling a direct comparison.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed answers and clarifications. I find this paper novel, easy to follow, and of interest to the community. The strengths of the paper significantly outweigh the weaknesses, and I increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you. We appreciate your suggestions and feedback. | Summary: This paper introduced a new framework for the training of Neural Optimal Transport, a recently emergent paradigm to enable the training of optimal transport plans in larger-scale settings. The key contribution lies in the novel formulation of a regularized training loss that incorporates an expectile regularization term, which makes the max-min training of the generative NOT more stable. Empirical evaluations demonstrated improvements both in reducing the evaluated metrics and in shortening the training runtime.
Strengths: - The authors seem to have done a thorough literature review.
Weaknesses: - Theoretical motivation unclear: lack of references on the instability of finding the optimal c-conjugate (i.e., the motivation of this work). Where are the citations for the sentence from line 76 to 78? What does it mean to make the optimization problem more "balanced" with the expectile regression regularization (line 86)? The same applies when the authors claim expectile regression is a popular option for estimating the maximum of distributions through neural networks (line 130).
- I suggest the authors rewrite their methodological section to be more compact and to the point (less verbose with the derivations; these can be put in the Appendix). Overall I was getting lost in this section: the full main training objective is never fully written out, and I had to resort to the pseudocode in Algorithm 1 to see what was going on and what final loss was being used.
- **Major:** for eq (13) and eq (14) to hold, you need both $f$ and $g$ to be convex functions (a la Brenier's theorem) with respect to the input; the only option is to use a parameterization with ICNNs. However, in Tables 4-5 and 7 the authors used non-convex MLPs as the parameterization; this contradicts the write-up in the methodology part. Could the authors elaborate on this?
- Ambiguous reporting of the empirical evaluations: most of the image tasks should use a standard metric such as FID. Many of the FIDs for the baselines are missing.
Writing and notation nitpick:
- I don't get the derivation of the main loss function, eq (17). The term in eq (15) is with respect to $\theta$ and a term $y - f_\theta$. What is the equivalence of $y$ and $f_\theta$ for $\mathcal{L}_\tau$ in equation (17) then?
- There should be an explanation that the quantity in Eq. (7) is substituted into Eq. (5).
- Optimization and optimisation are used interchangeably. Stick to one of these.
- What is $\hat{\pi}$ right at the beginning of Section 3? How is it different from $\pi$? The same with $\hat{f}$ and $\hat{g}$. In the background section, all of the notations carry hats ($\hat{T}, \hat{\pi}, \hat{f}, \hat{g}$), then they go back to normal in Section 4; are these different problems from the introduction?
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Lack of references on the instability of finding optimal c-conjugate (the motivation of this work). What does it mean to make the optimization problem more "balanced" with the expectile regression regularization on (line 86)? The same applies when the authors claim expectile regression is a popular option for estimating maximum of distributions through neural networks (line 130).**
We are thankful for the questions and for suggesting extra citations. We would like to stress that the main motivation behind our work was finding a fast, simple and accurate solver. The instability of finding an accurate c-conjugate transform is the main bottleneck in existing neural OT solvers. These problems were thoroughly discussed in the following works (refer to the references in our paper, e.g., Amos 2022a and Korotin et al. 2021). For example, Korotin et al. 2021 (Section 4.3, page 7) state that "The exact conjugate solver is slow since each optimization step solves a hard subproblem for computing the conjugate. Solvers that approximate the conjugate are also hard to optimize: they either diverge from the start or diverge after converging to nearly-optimal saddle point".
Moreover, this claim is empirically supported by the results in Tables 1 and 2 (i.e., W2OT-Objective and W2OT-Cycle diverged in all W2 tasks). Remarkably, our proposed regularization method stabilizes the overall training procedure.
The term "more balanced" here means that we optimize the optimal map $T$ and the potential $g$ in the OT problem (Equation 6) synchronously and with the same frequency.
Recently, expectile regression has been used in some offline Reinforcement Learning algorithms and representation learning approaches, among them Implicit Q-Learning (IQL) methods [R1, R2].
[R1] Ilya Kostrikov, Ashvin Nair and Sergey Levine. Offline Reinforcement Learning with Implicit Q-Learning, 2022.
[R2] Dibya Ghosh, Chethan Anand Bhateja, Sergey Levine. Reinforcement Learning from Passive Data via Latent Intentions, 2023.
**Q2 Major: for eq (13) and eq (14) to hold you need both f and g to be convex functions (a la Brenier's theorem) with respect to the input; the only option is to use a parameterization with ICNNs. However, in Tables 4-5 and 7 the authors used non-convex MLPs as the parameterization; this contradicts the write-up in the methodology part. Could the authors elaborate on this?**
In Section 3, the variables with hats ($\hat{\pi}$, $\hat{T}$, $\hat{f}$, $\hat{g}$) correspond to the solution (argmins) of the OT problem (6). Equations (13) and (14) hold under the specified conditions (the source and target domains $X$, $Y$ are compact and the measures are absolutely continuous). So there is no strict requirement of $c$-concavity (or convexity); the potentials $f$ and $g$ must merely be the solution to the OT problem (ref. Santambrogio, Theorem 1.17). During training, we may also use equations (13) and (14) with non-$c$-concave functions $f$ and $g$, because there are no restrictions on $T(x)$ in problem (6) and we can use any representation for the transport mapping function.
Under weaker constraints (for example, $X=Y=\mathbb{R}^n$), the $c$-concavity of the potentials $f$ and $g$ may be required. In this case, we can rely on local $c$-concavity in the data concentration region. In addition, if the conditions in equations (13) and (14) are not satisfied, we can use the mode (is\_bidirectional = False), in which we use an arbitrary function for $T(x)$ and do not express it through the potential.
In practice, there is no benefit in using ICNNs, because they make sense only for the squared Euclidean cost, and even in this case the optimization problem involves additional constraints, becoming more difficult. Similar empirical observations of the under-performance of ICNNs were reported in (Amos 2022a).
**Q3: Ambiguous report of the empirical evaluations: most of the image task should use standard metric such as the FID. Many of the FIDs for the baselines are missing.**
Respectfully, the missing values in Table 3 mean that the corresponding experiments were not performed in the baseline publications. Given the suggestion, we performed additional experiments using the official implementation of the OT methods reported by the other authors to fill in the gaps, which we include in a table below and propose to add to our revision.
| Task (FID) | Extr. OT | Ker. OT | ENOT |
| :---------------------: | :--------: | :-------: | :-----: |
| Handbags ⇒ Shoes 128 | 27.1 | 26.7 | 19.19 |
| FFHQ ⇒ Comics 128 | 20.95 | 20.81\* | 17.11 |
| CelebA(f) ⇒ Anime 64 | 14.65 | 18.28\* | 13.12 |
| CelebA(f) ⇒ Anime 128 | 19.44\* | 21.96 | 18.85 |
**Q4: What are $\hat{\pi}, \hat{T}, \hat{f}, \hat{g}$?**
We are sorry for the ambiguity. In Section 3, the hats ($\hat{\pi}, \hat{T}, \hat{f}, \hat{g}$) correspond to the solution of the OT problem in equation 6. As written at the beginning of that section, $\hat{\pi}$ is the **optimal** transport plan, $\hat{T}$ is the **optimal** transport mapping, and $\hat{f}, \hat{g}$ are the **optimal** Kantorovich potentials. The notation without $\hat{\cdot}$ merely means that the optimization is done over the given variable.
**Q5: Derivation of Equation 17:**
We approximate the maximum in eq. (16) by the $\tau$-expectile of $g^T(x) - c(x, y)$ conditioned on $y$. So the target (the term $y$ in eq. 15) of the expectile regression here is $g^T(x) - c(x, y)$. The model is then $-g_{\eta}(y)$, with a negative sign, as we approximate the c-transform of $g^T(x)$ as $\inf_x c(x, y) - g^T(x)$. The corresponding regression loss is
$$L_{\tau} \big( g^T(x) - c(x, y) + g_\eta(y) \big).$$
Substituting the definition of $g^T(x)$ from eq. (7), we derive expression (17):
$$L_{\tau} \big( c(x, T_\theta(x)) - g_\eta(T_\theta(x)) - c(x, y) + g_\eta(y) \big).$$
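As a concrete illustration of the asymmetric loss in this derivation, here is a minimal numpy sketch of the standard expectile regression loss $L_\tau(u) = |\tau - \mathbb{1}[u < 0]|\, u^2$ (assumed to match the paper's Eq. 15; the residual `u` stands in for the bracketed term above):

```python
import numpy as np

def expectile_loss(u, tau):
    """Asymmetric squared loss: residuals with u >= 0 are weighted by tau,
    residuals with u < 0 by (1 - tau). As tau -> 1, under-estimating the
    target is penalized far more than over-estimating it, so the fitted
    model approaches the conditional maximum."""
    w = np.where(u < 0.0, 1.0 - tau, tau)
    return w * u ** 2
```

For example, with `tau = 0.9`, a residual of `+1` costs `0.9` while a residual of `-1` costs only `0.1`, which is what pushes the regressed potential toward the sup in the c-transform.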
**Q6: Full Objective:** The full objective is defined in lines 168-169. The expression is $L_g(\eta) + L_f(\theta) + \lambda R_g(\eta)$.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed rebuttal.
However, my major concern (Q2) regarding the soundness of this work remains:
- I fully understand that Brenier's theorem only deals with the squared Euclidean cost (W2 distance), but isn't this the cost function you used in a major part of the experimental section anyway (Sections 5.1 and 5.3)? Shouldn't this mean the c-concavity of $f$ and $g$ has to be enforced through the neural network parameterization?
- Moreover, the L2 norm used in Section 5.2 (different cost functionals) violates the condition that $h(x - y)$ must be strictly convex -- you can verify this yourself, or check Remark 1.18 in Santambrogio (2015) -- so I am not sure why your experimental results hold in all cases.
---
Rebuttal 2:
Comment: **Q1**: Thank you for requesting this clarification. Yes, as indicated in Sections 5.1 and 5.3, and in most of our experiments, we used squared Euclidean cost. Under Brenier's theorem conditions for the squared Euclidean cost, it holds that $\hat{T}(x) = \nabla \hat{f} (x)$, where $\hat{f}$ is some convex function. It follows that the optimal potential $\hat{f}$ is convex, even if one uses non-convex potentials $f$ in the training process.
Also, to reiterate our initial answer, there are no restrictions on $T(x)$ in Eq. (6) during the training and we can use any representation for this function, including $T(x) = \nabla f(x)$. Therefore, there is no need for $c$-concavity through the neural network parameterization whatsoever, and we do not use it, as it makes the solution worse in practice.
The beauty is that our expectile regularization actually forces functions to be more $c$-concave according to the $c$-concavity criterion (refer to Villani et al. [2009] Proposition 5.8).
**Q2**: We apologize for the ambiguity, which could have been avoided had we explicitly mentioned that we used flag `is_bidirectional=False` (meaning the training mode is one-directional in this example). Thank you for spotting the mismatch. So, in Section 5.2, we do not use (13) and (14) and we do not express $T(x)$ through the potentials, and the convexity of the cost function $h$ is not required. As such, we believe the issue should be resolved and there is no contradiction with Remark 1.18 in Santambrogio (2015). We will definitely add this additional clarification to the text.
Please kindly note that we do mention it in another part of the main paper (quotations below). So, by adding this clarification to Section 5.2 as well, we avoid the same confusion for future readers of the paper. Thank you.
> Lines 155-157 (Method section): “The transport mapping $T_θ(x)$ has the same parameters as $f_θ(x)$ if it can be expressed through $f_θ$ (ref. 13), or otherwise, when $f_θ$ is not used (one-directional training), it is its own parameters.”
> Lines 204-205 (Section 5.2): “we parametrize the map Tθ as a MLP”
---
Rebuttal Comment 2.1:
Comment: Thank you for the reply. Since my major concern is resolved, I'm raising the score towards acceptance. I would however expect the authors to better phrase their implementation of the experiments as they have promised in the rebuttal and response to my review.
---
Reply to Comment 2.1.1:
Comment: Thank you for your input and for raising the score. We commit to implement all the changes in the revised version of the paper. | Rebuttal 1:
Rebuttal: We thank our reviewers for their constructive feedback. In the attached PDF, we provide additional experiments requested by the reviewers, showcasing the performance of ENOT. Corresponding tables are duplicated within individual responses to each reviewer.
Pdf: /pdf/9f47c11ab2783716848c618c0a05277522809bf4.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: This work tackles the problem of estimating the dual Kantorovich OT potentials parametrized via neural networks. The authors propose to approximate the conjugate operator using a well-motivated loss called expectile regularization, which approximates a conditional maximum operator and is well suited for estimating the c-transform. The authors illustrate the performance of their method against several benchmarks, showcasing a significant improvement in both performance and runtime.
Strengths: - Very simple, novel and intuitive idea with an important and significant impact
- Thoroughly carried out experiments and evaluation
Weaknesses: - Little analysis of the drawbacks of the proposed method, as well as of the effect of setting the threshold parameter tau (see Q1-2 below)
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The expectile loss is well suited to approximate the c-transform only if tau is large enough (close to 1). Therefore, it would make sense to see a significant drop in performance when tau is close to 0.5. It would have been nice to validate this intuition in Figure 4, for instance.
2. I would expect this to lead to some form of tradeoff. Letting tau -> 1 should lead to some form of instability and/or slower convergence? Is that the case? How difficult is this tradeoff to settle in practice?
3. The fitted maps of Sinkhorn in Fig. 1 and Fig. 2 are odd; I suspect the entropic regularization was very high, leading to a very smooth transportation plan from which the fitted map was obtained by taking the maximum over values very close to each other.
- L74: is *squared
- L137: the* proposed regularization approach
- L160: the* proposed
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The expectile loss is well suited to approximate the c-transform only if tau is large enough (close to $1$). Therefore, it would make sense to see a significant drop in performance when tau is close to $0.5$. It would have been nice to validate this intuition in Figure 4, for instance.**
Thank you for raising this question. To characterize ENOT performance as a function of expectile $\tau$, we performed additional evaluation with $\tau$ ranging from $0.5$ to $0.999$ on **1)** image-to-image translation dataset (CelebA(female) $\Rightarrow$ Anime with dim=$64$, Table $3$), **2)** $W_2$ benchmark with dim=$256$ (Table $2$), and **3)** 2D dataset from Figure $7$ (second row). In these experiments, we indeed observe a significant drop in performance when $\tau$ approaches $0.5$ ($0.7, 0.6, 0.5$) on the 2D dataset and CelebA (female) $\Rightarrow$ Anime with dim=$64$ (in terms of FID). On $W_2$ benchmark, the tendency is less evident. At the same time, values of $\tau$ in the range $\left[0.90, 1.0\right]$ always demonstrate convergence of ENOT, giving good results in all experiments. We will reflect on these new experiments in the revised manuscript.
| Expectile ($\tau$) | *$\mathcal{L}_2 ^\text{UV}$* (dim=$256$) | $W_2$ - 2D | FID (CelebA(f) -> Anime) | MSE (CelebA(f) -> Anime) |
| :---------: | :----------------: | :--------------: | :------------------------:| :------------------------: |
| 0.5 | 0.55 | 33.47 | 16.43 | 0.264 |
| 0.6 | 0.52 | 12.63 | 16.28 | 0.26 |
| 0.7 | 0.51 | 9.59 | 15.95 | 0.265 |
| 0.8 | 0.49 | 1.4 | 15.19 | 0.262 |
| 0.9 | 0.5 | 0.06 | 13.87 | 0.266 |
| 0.95 | 0.54 | 0.03 | 14.27 | 0.267 |
| 0.999 | 0.55 | 0.02 | 13.91 | 0.288 |
**Q2: I would expect this to lead to some form of tradeoff. Letting tau -> 1 should lead to some form of instability and/or slower convergence? Is that the case? How difficult is this tradeoff to settle in practice?**
Setting $\tau \simeq 1$ may indeed cause instability. This can happen because, under certain conditions, the overall contribution of the proposed regularization term becomes zero, which means that the potentials can become unbounded. However, in our experiments such instability occurred extremely rarely (mostly due to bad optimizer parameters), resulting only in a slight drop in performance. Moreover, we carefully show that the optimal $\tau$ usually lies in the range $\left[0.9, 1.0\right]$, with different values in this range yielding approximately the same result; so selecting the optimal value of $\tau$ is simple.
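The behavior discussed in Q1-Q2 can be seen in a small hypothetical numpy experiment: the $\tau$-expectile of a sample, computed here via its fixed-point characterization as an asymmetrically weighted mean, equals the mean at $\tau = 0.5$ and approaches the maximum as $\tau \to 1$, which is why values near (but not at) 1 approximate the c-transform well.

```python
import numpy as np

def expectile(x, tau, iters=200):
    """tau-expectile of a 1-D sample, found by fixed-point iteration:
    m = weighted mean of x, with weight tau for points above m and
    (1 - tau) for points below. tau = 0.5 gives the plain mean; as
    tau -> 1 the estimate slides toward max(x)."""
    m = float(x.mean())
    for _ in range(iters):
        w = np.where(x < m, 1.0 - tau, tau)
        m = float((w * x).sum() / w.sum())
    return m
```

On `x = [0, 1, 2, 3]`, this gives the mean `1.5` at `tau = 0.5` and a value close to the maximum `3` at `tau = 0.999`, mirroring the monotone trend in the 2D column of the table above.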
**Q3: The fitted maps of Sinkhorn in fig1 and fig2 are odd, I suspect the entropic regularization was very high, leading to a very very smooth transportation plan from which the fitted map was obtained by taking the maximum over values very close to each other.**
Thank you for pointing this out. In order to be consistent with the experiments from the Monge gap paper (Uscidda and Cuturi [2023]), we used their exact default parameters of the entropic regularization, namely $\epsilon = 0.01$ $\cdot$ mean(C), with C being the cost matrix (in Figure 1, $\epsilon \sim 0.12)$, replicating the 'odd' look from the original paper. Moreover, we observe that lowering the values of $\epsilon$ produces plots of similar transportation smoothness. | null | null | null | null | null | null |
WATT: Weight Average Test Time Adaptation of CLIP | Accept (poster) | Summary: The authors introduce a Test-Time Adaptation technique named Weight Average Test-Time Adaptation (WATT) for Vision-Language Models (VLMs) such as CLIP. WATT uses different text prompt templates to generate pseudo-labels for model updates and performs weight averaging to consolidate the information learned from those labels. The approach is tested across several datasets, demonstrating its capacity to boost performance without any further modification of the model or additional trainable modules.
Strengths: + The novelty of the study is that it proposes a novel test-time approach. It involves averaging weights obtained with varying text prompts, which deviates from conventional TTA techniques and makes it a high-quality extension to existing TTA methods.
+ It ensures quality by using rigorous experimental setup: involving thorough evaluations on various datasets, with good performance demonstrated by improvements over the current leading techniques.
+ It maintains clarity in its presentation of both the methodology and the results. Visual aids and detailed elaborations ensure good understanding, though the document could still be polished further.
+ The importance of the proposed method lies in its ability to adapt using a single image without requiring any further changes to the model; this level of adaptability is highly significant for real-world applications across different fields.
Weaknesses: - Limited Scope of Evaluation: state-of-the-art methods [1, 2] published in recent CVPR/ICCV are missing from the comparison in the experimental part and from the discussion in the Related Work section.
[1] Diverse data augmentation with diffusions for effective test-time prompt tuning. ICCV, 2023.
[2] Efficient Test-Time Adaptation of Vision-Language Models. CVPR, 2024.
- Computational Cost: The paper mentions the efficiency of WATT but does not provide a detailed comparison of computational costs with other TTA methods, which could be crucial for practical deployment.
- Generalization to Other Tasks: The paper focuses on image classification tasks. Extending the approach to other vision tasks like segmentation or detection and evaluating its performance there could provide a more comprehensive understanding of its applicability.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you provide a more detailed comparison of the computational cost of WATT compared to other TTA methods?
2. Have you considered evaluating WATT on real-world datasets with more complex domain shifts?
3. How would WATT perform on other vision tasks such as segmentation or object detection?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations and potential negative societal impacts adequately. They highlight the focus on image classification and suggest future work extending the method to other tasks. However, they could discuss more on the computational resource requirements and scalability in practical deployments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Q:* Limited Scope of Evaluation.**
*A:* Thank you for your valuable feedback regarding the scope of our evaluation. We appreciate the recommendation to include comparisons with more recent state-of-the-art methods in both the Experimental section and the Related Work section. Specifically, we have taken note of the methods mentioned: DiffTPT [Diverse Data Augmentation with Diffusions for Effective Test-Time Prompt Tuning, ICCV 2023] and TDA [Efficient Test-Time Adaptation of Vision-Language Models, CVPR 2024]. We compared them with the TTA methods mentioned in our paper, as well as with SAR, as noted by Reviewer y4Me. The results are summarized in the table below:
| Dataset | **CLIP** | **TENT** | **TPT** | **CLIPArTT** | **TDA** | **DiffTPT** | **SAR** | **WATT-P** | **WATT-S** |
| :------------ | :------- | :------- | :------ | :----------- | :------ | :---------- | :------ | :--------- | :--------- |
| **CIFAR-10** | 88.74 | 91.69 ± 0.10 | 88.06 ± 0.06 | 90.04 ± 0.13 | 84.09 ± 0.04 | 83.07 ± 0.05 | 89.05 ± 0.02 | 91.41 ± 0.17 | 91.05 ± 0.06 |
| **CIFAR-10.1**| 83.25 | 87.60 ± 0.45 | 81.80 ± 0.27 | 86.35 ± 0.27 | 78.98 ± 0.37 | 76.50 ± 0.29 | 83.65 ± 0.04 | 87.78 ± 0.05 | 86.98 ± 0.31 |
| **CIFAR-10-C**| 59.22 | 67.56 | 56.80 | 71.17 | 48.00 | 56.77 | 60.45 | 72.83 | 73.82 |
| **CIFAR-100** | 61.68 | 69.74 ± 0.16 | 63.78 ± 0.28 | 69.79 ± 0.04 | 60.32 ± 0.06 | 52.80 ± 0.08 | 64.44 ± 0.01 | 70.38 ± 0.14 | 70.74 ± 0.20 |
| **CIFAR-100-C**| 29.43 | 35.19 | 30.46 | 41.51 | 22.08 | 22.89 | 31.92 | 44.68 | 45.57 |
According to the TDA supplementary materials, we selected the weighting factor alpha as 5.0 and the sharpness ratio beta as 2.0, which are stated as optimal. However, these values did not appear to be the best choice for more challenging datasets like CIFAR-10-C or CIFAR-100-C, which contain various corruptions. Adjusting these parameters based on the dataset would not be consistent with the principles of a fully TTA method, which might explain their suboptimal performance in our results.
Regarding DiffTPT, it generates 64 images per test image, making it challenging to use in a real-world TTA scenario. Similar to TDA, DiffTPT requires carefully chosen parameters to fit the dataset, whereas our method does not require dataset-specific tuning. This highlights the robustness and practicality of our approach in diverse real-world applications.
***Q:* Computational Cost.**
*A:* To address the reviewer's concern regarding the computational cost comparison of WATT with other TTA methods, we have conducted a thorough evaluation under consistent conditions using an NVIDIA A6000 GPU within the same Python environment. The table provided compares the adaptation time, memory usage, and the number of learnable parameters for various TTA methods, including our proposed WATT method. The table clearly demonstrates that WATT-S, a sequential implementation of WATT, maintains competitive adaptation time and memory usage compared to methods like TENT and ClipArTT, which are efficient but lack the robustness of WATT's approach. Additionally, the table highlights that WATT-P with concurrent model training offers a faster adaptation time than WATT-P with a for-loop implementation, albeit at the cost of higher memory usage.
It's important to note that methods like DiffTPT and MEMO, which show significantly higher adaptation times, employ off-the-shelf diffusion models and AugMix augmentation, respectively, resulting in time-consuming processes that may be impractical for real-world scenarios. In contrast, the effectiveness of our WATT-S method makes it better suited for scenarios where a robust, rapid, and resource-efficient adaptation is crucial.
| Method | **Adaptation Time** | **GPU Memory** | **Percentage of Learnable Parameters** |
| :--------- | :------------------ | :-------------- | :------------------------- |
| **TENT** | 0.3 sec. | 1.5 GB | 0.026% |
| **ClipArTT**| 0.6 sec. | 1.7 GB | 0.026% |
| **SAR** | 0.4 sec. | 1.4 GB | 0.026% |
| **MEMO** | 165 sec. | 2 GB | 0.026% |
| **DiffTPT**| 8.5 sec. | 10 GB | 0.001% |
| **WATT-P** | 23.2 sec. | 1.5 GB | 0.026% |
| **WATT-P (concurrent)** | 2.9 sec. | 10 GB | 0.208% |
| **WATT-S** | 2.3 sec. | 1.5 GB | 0.026% |
We noticed that we used the word "efficiency" in lines 105 and 382. We apologize for this; we should have used "effectiveness" instead.
***Q:* Generalization to Other Tasks.**
*A:* While our current work focuses on image classification, exploring the applicability of WATT to tasks such as segmentation and object detection is indeed an interesting direction for future research. It is worth noting that many recent TTA papers also primarily focus on classification tasks. Extending to other tasks would require tailored experimental settings and potentially different methodological adjustments (e.g., CLIP cannot be used directly for segmentation), which we believe are beyond the scope of this paper. We appreciate your suggestion and consider it a valuable avenue for further investigation.
***Q:* Have you considered evaluating WATT on real-world datasets with more complex domain shifts?**
*A:* Due to limited space, please refer to the global rebuttal for a general explanation of the datasets we used.
---
Rebuttal 2:
Comment: Thank you for thoroughly addressing my concerns. After reviewing the comments and responses for the other reviewers, I see that their concerns have also been resolved. The authors have provided clear definitions of terms for better understanding, conducted additional experiments to further evaluate the effectiveness of the proposed method, and offered more in-depth analysis of how the proposed method works in various settings.
Overall, the rebuttal enhances my confidence in this paper. With careful consideration, I believe this paper with revision is worthy of NeurIPS and will significantly impact the test-time adaptation field. My final decision is “strong accept.”
---
Rebuttal Comment 2.1:
Comment: Thank you for your thorough review and constructive feedback throughout this process. We greatly appreciate your positive evaluation and are pleased that our revisions have addressed your concerns. | Summary: This paper proposes a test-time adaptation (TTA) method for CLIP by integrating various textual prompt templates into Weight Average (WA) methods (Ref [11], [22]). Experiments on multiple types of domain shifts show the effectiveness of the proposed method.
Strengths: - The proposed method is effective yet simple.
- The experiments span different domain shift scenarios and demonstrate better performance of the method.
Weaknesses: - The second contribution at the end of Section 1 lacks sufficient evidence. Table 4 only shows that the proposed method can handle small batch sizes without comparison with others. Additionally, there are existing TTA works studying small batch size scenarios. (SAR [Towards Stable Test-Time Adaptation in Dynamic Wild World, 2023], MEMO [MEMO: Test Time Robustness via Adaptation and Augmentation, 2022], etc.). Recent diffusion model-based TTA methods (such as DDA [Back to the Source: Diffusion-Driven Test-Time Adaptation, 2023]) also show robustness for small batch sizes.
- The technical contribution seems not that significant. The core idea is introducing multiple prompt templates into two previous WA methods, to my understanding.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is the evaluation done online (like the setting in the TENT paper where inference and adaptation happen on each batch in data streams), or does the model adapt with a single batch and then perform inference on the whole test set?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors mention limitations in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Q:* Comparison with other methods and a batch size of 1.**
*A:* Thank you for your insightful feedback. We appreciate your comments regarding the need for more comprehensive evidence to support our second contribution in Section 1. In response, we have tested the TTA methods mentioned in our paper with a batch size of 1. Additionally, we implemented the comparisons you suggested and adapted SAR [Towards Stable Test-Time Adaptation in Dynamic Wild World, 2023] and MEMO [MEMO: Test Time Robustness via Adaptation and Augmentation, 2022] to CLIP.
We decided not to implement DDA [Back to the Source: Diffusion-Driven Test-Time Adaptation, 2023]. While it is an excellent article, its benchmarks differ from ours. In DDA, the input is adapted rather than the encoder, and it aligns more with test-time training than fully test-time adaptation, as it requires the source dataset to train the diffusion model.
The results with a batch size of 1 are summarized in the table below:
| Dataset | **CLIP** | **TPT** | **CLIPARTT** | **SAR** | **MEMO** | **WATT-S** |
| :------------ | :------- | :------ | :----------- | :------ | :------- | :--------- |
| **CIFAR-10** | 88.74 | 88.29 | 88.76 | 87.41 | 89.29 | 89.87 |
| **CIFAR 10.1**| 83.25 | 82.85 | 83.15 | 82.32 | 83.80 | 84.55 |
| **CIFAR-10-C**| 59.22 | 59.03 | 59.18 | 58.70 | 61.15 | 61.26 |
As can be seen, WATT-S achieves the highest accuracy in all cases. On CIFAR-10.1, it outperforms SAR by 2.23% and MEMO by 0.75%. As highlighted in our paper, this improvement is achieved without any image augmentation, a common practice in previous TTA approaches working with small batches.
***Q:* The technical contribution seems not that significant. The core idea is introducing multiple prompt templates into two previous WA methods, to my understanding.**
*A:* Our work introduces the use of multiple prompt templates in weight averaging (WA) methods, which, to our knowledge, is a novel approach in the TTA field. As noted by Reviewer HDdi, this approach deviates from conventional TTA techniques and offers a valuable extension to existing methods. We have also conducted a comparative analysis of WA with other common ensembling methods to highlight its effectiveness.
Moreover, our paper is the first to demonstrate performance fluctuations due to using different text templates. Recognizing this, we leveraged these fluctuations as a benefit to enhance WA, whereas previous articles focused on using image augmentations or varying hyperparameters at training time.
***Q:* Is the evaluation done online (like the setting in the TENT paper where inference and adaptation happen on each batch in data streams), or does the model adapt with a single batch and then perform inference on the whole test set?**
*A:* In the context of the TENT paper, the model adapts to a given batch and carries over the parameters from the most recent adaptation when a new batch is introduced. In our framework, adaptation and inference occur within the same batch, similar to TENT. However, when a new batch is introduced, the model reverts to the standard CLIP configuration before adapting.
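The episodic protocol described in this answer can be sketched generically as follows. This is a minimal illustration, not the authors' implementation: `adapt` and `predict` are hypothetical stand-ins for the per-batch update and the classifier, and the key point is the reset to the source weights before every batch.

```python
import copy

def episodic_adapt_and_predict(base_params, batches, adapt, predict):
    """For each incoming batch: revert to the source (unadapted) parameters,
    adapt on that batch alone, then run inference on the same batch.
    No adapted state leaks from one batch to the next."""
    outputs = []
    for batch in batches:
        params = copy.deepcopy(base_params)  # reset to the source model
        params = adapt(params, batch)        # test-time update on this batch
        outputs.append(predict(params, batch))
    return outputs
```

Unlike TENT-style continual adaptation, each batch's prediction here depends only on that batch, which matches the reset-per-batch behavior described above.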
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional experiments and explanations. The experiments have addressed my main concern, so I have raised my score from 5 to 6. However, I still maintain that the technical contribution is not particularly significant. | Summary: This paper proposes a method called Weight Average Test-Time Adaptation (WATT) to improve test-time adaptation (TTA) for the CLIP model. The core idea is to use different text templates to construct multiple text prompts and adapt the model weights using these different prompts. During the evaluation stage, the prompts provide predictions based on the adapted weights and text embedding averaging across multiple prompts. The authors conduct comprehensive experiments that demonstrate the generalization and strong performance of the WATT method.
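The weight-averaging step at WATT's core can be sketched minimally. This assumes parameters are stored as name-to-array dicts (one adapted copy per text prompt template); the per-template adaptation itself is omitted, so this is an illustration of the averaging idea, not the paper's implementation.

```python
import numpy as np

def average_state_dicts(state_dicts):
    """Element-wise mean of the parameter tensors of several adapted models,
    e.g. one model adapted per text prompt template. All dicts are assumed
    to share the same keys and tensor shapes."""
    keys = state_dicts[0].keys()
    return {k: sum(sd[k] for sd in state_dicts) / len(state_dicts) for k in keys}
```

The averaged dict would then be loaded back into the model before inference, consolidating what each template-specific adaptation learned.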
Strengths: * The idea is simple but works effectively.
* Comprehensive experiments support the effectiveness of this method in the following aspects:
Robustness, as reflected in Table 6.
Efficient adaptation, as reflected in Table 4.
Generalization and state-of-the-art comparison, as shown in Table 7.
Weaknesses: There are no clear weaknesses, but I do have some questions and potential limitations that do not affect my rating.
Technical Quality: 4
Clarity: 3
Questions for Authors: * Is it possible to combine text prompt augmentation (this method) with TTA strategies that use image augmentation to achieve better results?
* Some templates may be incorrect. For example, as shown in Table 2, prompt T0: "a photo of a {class k}" and prompt T2: "a bad photo of the {class k}" refer to the same image, but why is this photo considered bad? If you add distortion or corruption, then it might be appropriate to call it bad. Also, for T6: "art of the {class k}," if you use diffusion to generate an art image of the original image, then the prompt would match.
* Can you measure the similarity score of these text templates in the CLIP text encoder space?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Can the authors test whether this method can also improve the latest VLM, such as SigLIP?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Q:* Is it possible to combine text prompt augmentation (this method) with TTA strategies that use image augmentation to achieve better results? Some templates may be incorrect. For example, as shown in Table 2, prompt T0: "a photo of a {class k}" and prompt T2: "a bad photo of the {class k}" refer to the same image, but why is this photo considered bad? If you add distortion or corruption, then it might be appropriate to call it bad. Also, for T6: "art of the {class k}," if you use diffusion to generate an art image of the original image, then the prompt would match.**
*A:* To clarify how we chose the templates for weight averaging: The CLIP paper identifies 80 templates that enhance model robustness and performance. They ultimately conclude that 7 of these templates best summarize their model (see https://github.com/openai/CLIP/blob/main/notebooks/Interacting_with_CLIP.ipynb). In our work, we use these 7 generic templates and add the common one, “a photo of {},” based on their optimization. These templates are not specifically linked to the content of the images. However, linking these text templates to the images through augmentation or generation is a promising idea for future work. Thank you for the suggestion.
***Q:* Can you measure the similarity score of these text templates in the CLIP text encoder space?**
*A:* As suggested, we computed the similarity between the text templates for each class and averaged them into a single matrix, which we have included in the PDF.
The similarity between different templates for the same class can vary. This highlights that utilizing diverse templates, despite their individual variations, provides a richer set of information that significantly enhances the model's performance.
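A similarity matrix of this kind can be computed as sketched below. The text encoder producing the embeddings is assumed (e.g., CLIP's); only the cosine-similarity step is shown, and the function name is ours.

```python
import numpy as np

def template_similarity(embeds):
    """Pairwise cosine similarity between text-template embeddings
    (one row per template, e.g. from CLIP's text encoder)."""
    E = embeds / np.linalg.norm(embeds, axis=1, keepdims=True)
    return E @ E.T
```

Averaging such per-class matrices over all classes yields a single summary matrix like the one the authors report.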
***Q:* Can the authors test whether this method can also improve the latest VLM, such as SigLIP?**
*A:* Thank you for your suggestion! We have incorporated SigLIP into our code and conducted a comparative analysis with our Sequential method (WATT-S) on different datasets. The results are summarized in the table below:
| Dataset | **SigLIP** | **WATT-S** |
| :-------------------- | :-------------------- | :-------------------- |
| **CIFAR-10** | 66.35 | 75.02 ± 0.05|
| **CIFAR-10.1** | 57.30 | 65.87 ± 0.21|
| **CIFAR-10-C** | 37.52 | 45.29 ± 0.13|
| **CIFAR-100** | 33.97 | 65.87 ± 0.21|
| **CIFAR-100-C** | 14.43 | 20.05 ± 0.05|
Our method, when applied to SigLIP, shows significant improvements across all datasets, highlighting the effectiveness of WATT in enhancing performance.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Thank you for your rebuttal. All my concerns have been addressed. The results on SigLIP are very positive, so I have increased my score.
---
Reply to Comment 1.1.1:
Comment: We're grateful for your detailed feedback and pleased that our response effectively addressed your concerns. The improvement with SigLIP is encouraging, and we sincerely thank you for suggesting that comparison. Your input has been invaluable in strengthening our work. | null | null | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewers' insightful and constructive comments and are pleased to note that all three reviewers voted towards acceptance. We are also encouraged by the feedback highlighting the robustness and generalizability of our method (Reviewer wUmo), its superior performance across various domains (Reviewer y4Me), and the rigor of our experiments (Reviewer HDdi).
Below, we address all the questions raised by the reviewers, including additional baseline evaluations and clarifications as requested. For Reviewer wUmo’s convenience, we have included additional plots and results in the updated PDF.
---
### **Continuation of Answer to Reviewer HDdi:**
***Q:* Have you considered evaluating WATT on real-world datasets with more complex domain shifts?**
*A:* Thank you for this remark; we will add a better explanation of the datasets to our supplementary materials. In our investigation, we use the VisDA-C dataset, which challenges models with two distinct domain shifts: the simulated shift and the video shift. The simulated shift includes 152,397 3D-rendered images across 12 diverse classes, while the video shift comprises 72,372 YouTube video frames spanning the same categories. This dataset captures the diversity of imagery types a model may encounter, posing a significant challenge.
Moreover, we evaluate our proposed method on three other datasets: PACS, VLCS, and OfficeHome. These datasets help understand various domain shifts, including texture and style variations. The PACS dataset consists of 9,991 images across four domains (Art, Cartoons, Photos, Sketches) and seven classes. The VLCS dataset contains 10,729 images across four domains (Caltech101, LabelMe, SUN09, VOC2007) and five classes. Lastly, the OfficeHome dataset includes 15,588 images across four domains (Art, Clipart, Product, Real) and 65 classes. Evaluating across these distinct domain shifts showcases the generalizability of our method.
These datasets are more representative of real-world scenarios compared to CIFAR, with complex domain shifts.
Pdf: /pdf/c547519f4e795b3cece31a89f878c43a2b378f50.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Bias Detection via Signaling | Accept (poster) | Summary: The paper studies bias detection in the Bayesian persuasion model. In particular, the paper wants to test for a threshold that represents the resistance of the receiver to changing their mind and updating their prior. The test is performed through signalling schemes, observing the action taken by the receiver. The paper is concerned with finding the signalling scheme that minimizes the worst-case expectation of the number of queries needed. The problem is solved by geometrical techniques.
Strengths: The paper is well written and fairly easy to follow. Moreover, it provides interesting answers to interesting questions. The answers go through a geometrical characterisation, and the paper helps build the intuition required for understanding the main results.
Weaknesses: The main weakness in my opinion is the definition of the baseline and its poor understanding in the larger context of PAC learning.
Why is the baseline defined as such rather than considering the minimum expected number of samples needed to determine whether $\omega \ge \tau - \epsilon$ or $\omega \le \tau + \epsilon$ with probability at least $1 - \delta$? This is a key technical point and, while negative results such as Lemma 4.4 are expected with $\epsilon = 0$, they are not expected to hold with $\epsilon > 0$. For example, $(\epsilon, \delta)$-PAC learning for best arm identification in multi-armed bandits has infinite sample complexity with $\epsilon = 0$ even in trivial cases, such as when all the arms have the same mean.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The main weakness in my opinion is the definition of the baseline and its poor understanding in the larger context of PAC learning. Why the baseline is defined as such rather than considering the minimum of expected samples needed to understand if $\omega \ge \tau - \epsilon$ or $\omega \le \tau + \epsilon$ with probability at least $1 - \delta$? This is a key technical point and, while negative results such as lemma 4.4 are expected with $\epsilon = 0$, they are not expected to hold with $\epsilon > 0$. For example, $(\epsilon, \delta)$-PAC learning for best arm identification in multi-armed bandit has infinite sample complexity with $\epsilon = 0$ even in trivial cases such as all the arms have the same mean.
This is an interesting point worth clarifying, but we disagree that it is a weakness and believe we can provide an effective rebuttal.
We first note that it isn't completely clear whether you're interested in the problem of testing whether $\omega\in [\tau-\epsilon,\tau+\epsilon]$ (in which case "if" and "or" should be replaced with "and") or the "gap" problem of testing if $\omega \le \tau - \epsilon$ or $\omega \ge \tau + \epsilon$ (in which case the inequalities are flipped). We assume you have the former problem in mind, as it is more consistent with the PAC setting.
Next, the problem of testing whether $\omega\in [\tau-\epsilon,\tau+\epsilon]$ involves testing whether $\omega\leq \tau+\epsilon$ *and* $\omega\geq \tau-\epsilon$, that is, it is equivalent to two instances of the exact threshold problem. This may seem unintuitive, and you may wonder whether testing other thresholds in this interval might lead to better sample complexity; that is not the case, though, because the choice of bias level is adversarial (in the same way that testing a threshold $\tau$ cannot be sped up by testing another threshold $\tau'\neq \tau$).
Finally, we argue that there is no reason to include a confidence parameter. Unlike standard PAC problems (where there is some underlying distribution), the only source of randomness in our problem is the signaling scheme. Recall, also, that we show that the optimal scheme simply maximizes the probability of useful signals. If that probability is $p$, the optimal scheme needs $1/p$ samples in expectation. You could ask instead about the number of samples $t$ needed for a success probability of $1-\delta$, but given this discussion, the answer is clear: $(1-p)^t\leq \delta$, which is satisfied when $t \ge \frac{1}{p} \log \frac{1}{\delta}$.
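As a quick numerical sanity check on this calculation (the snippet is ours, not the authors'), one can verify that $t = \lceil \frac{1}{p} \log \frac{1}{\delta} \rceil$ rounds drive the failure probability $(1-p)^t$ below $\delta$; this holds because $\log(1-p) \le -p$.

```python
import math

def rounds_needed(p, delta):
    """Number of rounds t with (1 - p)^t <= delta, using the
    sufficient condition t >= (1/p) * log(1/delta)."""
    return math.ceil(math.log(1.0 / delta) / p)

# Check the bound over a grid of useful-signal probabilities p
# and failure probabilities delta.
for p in (0.1, 0.3, 0.5):
    for delta in (0.05, 0.01):
        assert (1.0 - p) ** rounds_needed(p, delta) <= delta
```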
Your PAC intuition may hold true if, analogously to a stochastic multi-armed bandit problem, the bias $\omega$ is drawn from a distribution, but that's a fundamentally different model (in the same way that stochastic bandits are fundamentally different from adversarial bandits).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply. I now better understand the relationship with PAC learning. I believe that this discussion should be expanded and included in the paper, as other reviewers seemed to have similar questions. I confirm my evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will gladly expand this discussion and include it in the paper. | Summary: This paper studies the problem of determining the bias level of an agent in updating their beliefs using signaling schemes. Specifically, they detect to what degree, the agent is updating their beliefs biased towards their own prior, or 'correctly' according to the Bayesian rule. They propose a signaling scheme to detect whether the bias level is below or above a given threshold $\tau$. The core of their algorithm design is the revelation principle, which converts to a linear program subjecting to optimality, indifference, and probability distribution constraints.
Strengths: The paper is in general well-written and smooth to follow. The problem formulation and scheme design intuitions are nicely explained. The problem studied is novel. The proofs seem to be theoretically sound.
Weaknesses: Although the studied problem is novel, it lacks motivation and application in the real world. The authors motivate it by connecting bias to disagreement and polarization, but they don't provide any detailed discussion. Also, currently the authors assume the principal knows the agent's prior. Then it makes no sense to "discount the opinions of biased agents to improve collective decision making" because the principal knows enough information and can solve the optimal decision solely.
Technical Quality: 3
Clarity: 3
Questions for Authors: The problem studied in this paper is determining whether the bias is below or above a given threshold $\tau$. A closely related, and probably more common problem is directly estimating the bias level using signaling. Can the authors share their thoughts on this? For example, what's the main difficulty of this new problem? Do the authors believe (variants of) constant schemes are still optimal?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I don't see any limitations or potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Although the studied problem is novel, it lacks motivation and application in the real world. The authors motivate it by connecting bias to disagreement and polarization, but they don't provide any detailed discussion. Also, currently the authors assume the principal knows the agent's prior. Then it makes no sense to "discount the opinions of biased agents to improve collective decision making" because the principal knows enough information and can solve the optimal decision solely.
Your point is well taken that expanding our motivation would be beneficial, and we plan to do so in the revision.
We disagree with the specific criticism in your comment, however; in particular, we aren't sure what you have in mind when you refer to the "optimal decision," as this is not part of our model. One way to motivate our setup is to view it as the initial interaction between a decision maker and an expert before any future solicitation of information from the expert for decisions of interest. The goal of this interaction — which can be structured as an artificial game, for example — is to estimate the expert's bias so that the expert's future opinions can be appropriately adjusted, improved or even discarded. This is conceptually analogous to the design of interactions to measure individual risk attitudes before giving subjects decision-making tasks, an approach common to economists and psychologists. Note that here the decision maker only has an informational advantage in the initial interaction.
To better connect our problem to polarization, we note that one can make a distinction between two sources of polarization [1]: two people can be exposed to different sources of information (e.g., news stories that reflect opposite positions), or they can be exposed to similar sources of information but reach different conclusions. Techniques for measuring bias in our sense may help disambiguate these two sources. For example, for two people who have very different world views, if we find that they have low levels of bias according to our definition, it would increase our confidence that they were exposed to different sources of information.
[1] Nika Haghtalab, Matthew O. Jackson, Ariel D. Procaccia: Belief polarization in a complex world: A learning theory perspective. Proc. Natl. Acad. Sci. USA 118(19): e2010144118 (2021).
> The problem studied in this paper is determining whether the bias is below or above a given threshold $\tau$. A closely related, and probably more common problem is directly estimating the bias level using signaling. Can the authors share their thoughts on this? For example, what's the main difficulty of this new problem? Do the authors believe (variants of) constant schemes are still optimal?
Our approach can be directly employed to estimate the bias up to an $\epsilon$ error, by solving the threshold problem $\log(1/\epsilon)$ times, through binary search. This requires adaptive signaling schemes and constant schemes are not sufficient. We mentioned this in line 55, but we will certainly elaborate on this point (which was brought up in some form by all reviewers) in the revision.
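The binary-search reduction can be sketched as follows. Here `test_threshold` stands in for an assumed oracle that solves the threshold problem for a given $\tau$ (returning whether $\omega \ge \tau$); the function and its interface are illustrative, not from the paper.

```python
import math

def estimate_bias(test_threshold, eps):
    """Estimate the bias omega in [0, 1] up to an eps error using
    ceil(log2(1/eps)) calls to a threshold oracle.
    test_threshold(tau) is assumed to return True iff omega >= tau."""
    lo, hi = 0.0, 1.0
    for _ in range(math.ceil(math.log2(1.0 / eps))):
        mid = (lo + hi) / 2.0
        if test_threshold(mid):
            lo = mid   # omega lies in the upper half
        else:
            hi = mid   # omega lies in the lower half
    return (lo + hi) / 2.0
```

Each iteration halves the interval containing $\omega$, so after $\lceil \log_2(1/\epsilon) \rceil$ threshold queries the midpoint is within $\epsilon$ of the true bias; note that each query itself costs the threshold problem's expected number of samples.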
---
Rebuttal Comment 1.1:
Comment: We would greatly appreciate your response to the rebuttal, which, in our view, effectively addresses your main concerns. If there are lingering questions, we would gladly engage in a discussion. | Summary: The paper studies the problem of detecting whether an agent is updating their prior beliefs given new evidence in a Bayesian way, or whether they are biased towards their own prior. The paper considers a setting where biased agents form posterior beliefs that are a convex combination of their prior and the Bayesian posterior (parameterized by an unknown parameter $\omega$). Given a fixed $\tau$, the paper takes an information design to detect where $\omega\ge \tau$ or $\omega\le \tau$. In particular, one can design a signaling scheme and observe that actions taken by the agents to infer the bias level $\omega$. The paper aims to minimize the number of rounds (each round deploying a signaling scheme) needed to detect whether $\omega\ge \tau$ or $\omega\le \tau$.
The main results of the paper show that (1) a fixed signaling scheme suffices to achieve minimum number of rounds used detect whether $\omega\ge \tau$ or $\omega\le \tau$; (2) a computationally efficient algorithm to compute such optimal signaling scheme for detecting whether $\omega\ge \tau$ or $\omega\le \tau$.
Strengths: In practice, it is widely observed that human belief updating with uncertainty is usually not following the Bayesian manner due to various biases and oftentimes such biases remain unknown to the system. Thus, it is natural to study how to detect such human bias and thus I think the motivation of this work is strong. The paper uses a novel approach via information design to detect human bias level. This, in addition, also opens up another application of information design/Bayesian persuasion. The paper also presents that using such approach is computationally efficient (for constant number of states and actions).
The paper is also well-written and results are stated in a clear way. Overall, I think this paper is a good addition to NeurIPS.
Weaknesses: An immediate future direction: given any $\varepsilon \ge 0$, what is the sample complexity of narrowing down the unknown bias level $\omega$ to an $\varepsilon$-interval? I feel $O(\log \frac{1}{\varepsilon})$ rounds using binary search should suffice (though you may need to carefully handle the randomness of realized signals).
There is a previous work, "HOCMP 2021 — On the Bayesian Rational Assumption in Information Design," which presents real-world behavioral experiments showing that humans really do not update beliefs in a Bayesian way, and shows that a convex combination of the prior and the Bayesian posterior better explains human belief updating. The authors may want to include HOCMP 2021 and discuss its connections.
Technical Quality: 4
Clarity: 4
Questions for Authors: See above
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > An immediate future direction: given any $\epsilon$, what is the sample complexity of narrowing down the unknown bias level $\omega$ to an $\epsilon$-interval? I feel $\log \frac{1}{\epsilon}$ rounds using binary search should suffice (though you may need to carefully handle the randomness of realized signals).
Absolutely, we allude to this in line 55, but we will certainly elaborate on this point (which was brought up in some form by all reviewers) in the revision.
> There is a previous work, "HOCMP 2021 — On the Bayesian Rational Assumption in Information Design," which presents real-world behavioral experiments showing that humans really do not update beliefs in a Bayesian way, and shows that a convex combination of the prior and the Bayesian posterior better explains human belief updating. The authors may want to include HOCMP 2021 and discuss its connections.
Thanks for the excellent pointer; this will strengthen our case for focusing on the linear model of bias (convex combination of prior and Bayesian posterior), which we currently support via references [8,10,5] in line 39.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for your response. I do not have further concerns. | Summary: The paper studies a Bayesian persuasion problem involving a biased receiver. In this model, the bias is defined by how the receiver deviates from the Bayesian posterior, which is a convex combination of the prior and the induced posterior. The authors propose algorithms to test whether this bias exceeds a fixed constant and discuss scenarios where it is possible or not to fully detect the bias level. In cases where it is possible to detect this bias level, they propose algorithms to solve the problem polynomially using direct signaling schemes.
Strengths: - The problem studied in this paper is of interest to the Bayesian Persuasion community.
- The geometric characterization of the problem is interesting and clear, specifically the characterization of whether or not it is possible to learn the bias level presented in section 5.
Weaknesses: - Under the assumption that the sender knows everything about the environment (prior, utilities, etc.) and only needs to determine if the bias level is larger than a constant, I think that the results are not difficult to derive. The paper presents some characterizations, such as the use of constant algorithms and the study of instances in which it is possible to determine with one sample whether the bias level is greater than a constant, or whether it is not possible even with infinitely many samples. However, these characterizations alone are not sufficient to justify accepting the paper.
- The sample complexity problem studied in this paper is quite different from the classical notion, where you are given two parameters, $\epsilon$ and $\delta$, and you want to output an $\epsilon$-optimal solution with probability $1 - \delta$. I would have found it much more interesting (and standard) to compute a confidence bound for the bias level with high probability, rather than determining if the bias is larger than a constant.
- Finally, the main theorems are not clearly stated. For example, in the statement of Theorem 3.1, you refer to "the above signaling scheme," which requires the reader to look for this scheme on the preceding page. Additionally, in the main theorem, the final sample complexity of the optimal constant signaling scheme is not specified. This makes it challenging to have a high-level understanding of the contribution of your main theorems.
Technical Quality: 3
Clarity: 2
Questions for Authors: - In sample complexity problems, you are usually given two parameters $\epsilon$ and $\delta$, and you want to output an $\epsilon$-optimal solution with probability $1-\delta$. In your model, this can be formulated as learning the bias up to an $\epsilon$ error with probability $1-\delta$. My question is, why not study this version of the problem? Can your approach be employed to tackle this version of the problem as well?
- Can your algorithm be extended to scenarios with a multi-type receiver?
- In the case of finite sample complexity, is it possible to achieve a better dependence with respect to $\tau$, or are your results tight?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The sample complexity problem studied in this paper is quite different from the classical notion, where you are given two parameters, $\epsilon$ and $\delta$, and you want to output an $\epsilon$-optimal solution with probability $1 - \delta$. I would have found it much more interesting (and standard) to compute a confidence bound for the bias level with high probability, rather than determining if the bias is larger than a constant. [...] In sample complexity problems, you are usually given two parameters $\epsilon$ and $\delta$, and you want to output an $\epsilon$-optimal solution with probability $1-\delta$. In your model, this can be formulated as learning the bias up to an $\epsilon$ error with probability $1-\delta$. My question is, why not study this version of the problem? Can your approach be employed to tackle this version of the problem as well?
We believe that we can effectively address this concern, and would be happy to add the discussion below to the paper.
We first note that our approach can be directly employed to estimate the bias up to an $\epsilon$ error, by solving the threshold problem $\log(1/\epsilon)$ times, through binary search, as briefly mentioned in line 55. We will expand on this in the revised paper.
Let us now explain why we did not include a confidence parameter, and why our results directly imply confidence bounds if one did want to include one. First, we note that unlike the standard PAC setting (where there is some underlying distribution), the only source of randomness in our problem is the signaling scheme. As we show, the optimal scheme maximizes the probability $p$ of obtaining useful signals. If that probability is $p$, the optimal scheme needs $1/p$ samples in expectation. You could ask instead about the number of samples $t$ needed for a success probability of $1-\delta$, but given this discussion, the answer is straightforward: $(1-p)^t\leq \delta$, which is satisfied when $t \ge \frac{1}{p} \log \frac{1}{\delta}$.
In other words, our current formulation implicitly provides confidence bounds, but we agree it would be valuable to make this more explicit.
> The main theorems are not clearly stated. For example, in the statement of Theorem 3.1, you refer to "the above signaling scheme," which requires the reader to look for this scheme on the preceding page. Additionally, in the main theorem, the final sample complexity of the optimal constant signaling scheme is not specified. This makes it challenging to have a high-level understanding of the contribution of your main theorems.
For Theorem 3.1, we note that it's common to refer in a theorem statement to an algorithm that is defined previously, say "Algorithm 1." In our case, we felt it would be awkward to put the signaling scheme on lines 158-161 in an environment ("Signaling Scheme 3.1") because it's so simple. However, we are open to doing so if it would improve readability.
The point about Theorem 4.6 is more important, as it may stem from a misunderstanding. The final sample complexity of the optimal signaling scheme is equal to $1/p^*$, where $p^*$ is the objective value of the linear program (Equation 6, Algorithm 1) and represents the probability of useful signals. This sample complexity depends on the geometry of the instance and lacks a simple closed-form solution. The main contribution of Theorem 4.6 — which builds on Lemmas 4.1-4.5 — is algorithmic. Section 4 as a whole demonstrates that the optimal signaling scheme can be computed efficiently. While Section 5 provides insights into the sample complexity (and Theorem 3.1 gives tight bounds for a special case), we emphasize that the general case where the sample complexity is finite but greater than 1 does not appear to have a more detailed, concise characterization.
> Can your algorithm be extended to scenarios with a multi-type receiver?
The concept of a "multi-type receiver" could be interpreted in various ways.
If the sender knows the receiver's type in each round, we believe our results can be directly extended to this multi-type setting.
However, if the sender does not know the receiver's type, we conjecture that testing the receiver's bias becomes impossible. The intuition is that if different types consistently take "opposite" actions, the receiver's observed action provides no information about their bias.
> In the case of finite sample complexity, is it possible to achieve a better dependence with respect to $\tau$, or are your results tight?
Our results are tight for both the two-state-two-action case (Theorem 3.1) and the general case (Theorem 4.6), as they provide the optimal signaling schemes.
---
Rebuttal Comment 1.1:
Comment: We would greatly appreciate your response to the rebuttal, which, in our view, effectively addresses your main concerns. If there are lingering questions, we would gladly engage in a discussion. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Dynamic Message Passing on Graphs | Accept (poster) | Summary: This work proposes a new dynamic message passing method, $N^2$, which initializes a number of pseudo nodes and applies message passing across graph nodes and pseudo nodes. The message passing scheme relies on the proximity of the node embeddings, and the node embeddings are updated through message passing. The method is quite dynamic and flexible. Moreover, it is empirically shown to alleviate oversquashing and oversmoothing problems. Experimental results show the significance of the proposed method.
Strengths: - Overall, the writing is good and clear, the concepts are precisely described, and the introduction of background is easily understandable.
- The methodology is quite interesting, and the message passing design makes sense.
- The experimental results are strong on most datasets. And the method is widely applicable to both graph and node level prediction. The ablations are carefully designed and the visualizations are pretty good.
- The proposed method exhibits good performance on oversquashing and oversmoothing problems.
Weaknesses: - It would be good to have theoretical explanations of why the method works well against oversquashing and oversmoothing problems, though this is not mandatory.
- It would be good to compare with some graph structure learning or graph rewiring work. The baselines are basically GNNs and graph transformers.
Technical Quality: 3
Clarity: 4
Questions for Authors: - It is not intuitive to me why recurrent layers are used, apart from the desirable property of weight sharing. Can you explain this?
- Why use proximity measurement instead of some distance metrics?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: As mentioned by the authors, the performance deteriorates when the number of recurrent layers grows too large.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind suggestions and insightful questions about our work. We hope the following response can address your concerns. We also kindly refer you to the PDF in the global response for the new figures/tables during the rebuttal due to the limited number of characters.
**W1**: Theoretical explanations on why $N^2$ works for over-squashing/smoothing.
**Response**: We provide some intuitive, preliminary theoretical analysis below. We will continue working on this in the future.
- `Over-smoothing`: $N^2$ learns the displacements of nodes and gives rise to the linear combination of the layer input, the local message passing output, and the global message passing output. Specifically, the layer input represents the output of an `all-pass filter`, the local output is from a `low-pass filter`, and the global output is from a filter that aggregates messages from nodes of learnable similarity, and thus possesses `learnable frequency characteristics`. The messages from the all-pass filter and the learnable filter prevent the output from over-smoothing.
- We must note that global message passing with the uniform connection provides the SAME global output for all the graph nodes. This output can be regarded as a shared bias term and cannot alleviate over-smoothing. Please refer to *Fig R1b in the PDF response* for empirical results.
- `Over-squashing`: $N^2$ tackles over-squashing by detouring from the original graph bottlenecks and avoiding forming new pseudo-node bottlenecks.
- First, pseudo nodes provide two-hop message highways for graph nodes. Therefore, graph nodes do not have to pass messages through the original bottlenecks.
- Second, the pseudo-node bottlenecks stem from the overwhelming number of graph nodes in the receptive field of each pseudo node. However, by multiplying the messages from these nodes with learned edge weights, certain messages can be eliminated, excluding those nodes from the receptive field. We can define the effective receptive field size based on the edge-weight values, where nodes with smaller edge weights are eliminated from the effective field. This elimination reduces the size of the effective receptive field and avoids forming bottlenecks.
- Uniform connection shares the same edge weights for the messages from all nodes. Therefore, these messages are either completely eliminated or preserved, which cannot reduce the size of the effective receptive field. In *Fig R1a in the PDF response*, we show that $N^2$ with uniform connection is less effective in tackling over-squashing.
- Our dynamic connection measures the specific edge weights for each node, learning to eliminate messages from certain nodes, and thus reduce the size of the effective receptive field.
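The edge-weight elimination argument above can be sketched numerically. The sizes, threshold `tau`, and softmax weighting below are hypothetical stand-ins for $N^2$'s learned dynamic connection, not the paper's actual implementation:

```python
import torch

torch.manual_seed(0)
n_graph, n_pseudo, d = 6, 2, 4
g = torch.randn(n_graph, d)            # graph-node states
p = torch.randn(n_pseudo, d)           # pseudo-node states

# Dynamic connection: a per-node edge weight from spatial proximity.
w = torch.softmax(g @ p.T, dim=0)      # (n_graph, n_pseudo)

# Nodes whose weight falls below a threshold contribute (almost) nothing,
# so they drop out of the pseudo node's *effective* receptive field.
tau = 0.05
effective_sizes = (w > tau).sum(dim=0)

# Uniform connection: one shared weight per pseudo node, so messages are
# either all kept or all eliminated -- the receptive field cannot shrink.
uniform = torch.full((n_graph, n_pseudo), 1.0 / n_graph)
uniform_sizes = (uniform > tau).sum(dim=0)

assert (uniform_sizes == n_graph).all()          # always the whole graph
assert (effective_sizes <= uniform_sizes).all()  # dynamic can only shrink
```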
**W2**: Comparison with graph structure learning or graph rewiring methods.
**Response**: Thank you for your suggestion. Several graph transformer or GNN methods in our manuscript already employ graph structure learning or rewiring strategies, such as Exphormer with the expander graph, and DRew, GPRGNN, and H$_2$GCN with edge shortcuts. Beyond these methods, we also compared $N^2$ with DGM (L79-80), a graph structure learning method; the results are presented in Tab. R3. $N^2$ shows better performance than DGM.
| Table R3 | CORA | CITESEER | PUBMED |
| :------- | :--: | :--: | :--: |
| DGM | 84.6 | 74.8 | 88.6 |
| $N^2$ | **85.4** | **77.6** | **90.6** |
**Q1**: Why use the recurrent layer?
**Response**:
- `Why recurrent layer`: The recurrent layer is the core of $N^2$. It is an implementation of the message passing in the common space from Sec 3. The recurrent layer, including pseudo-node adaptation and dynamic message passing, empowers dynamic interactions between nodes. $N^2$ employs the layer to move nodes recursively to their optimal positions.
- `Why recurrence`: The main reason $N^2$ reuses the layer's parameters recurrently is to **largely reduce the number of parameters**. According to the ablation study in Fig. 8, nodes require multiple steps to move to their optimal positions, and each step would otherwise correspond to a new layer with its own set of parameters. Our ablation study found that the layers can learn to move nodes based on their current positions; therefore, a single set of parameters is sufficient, which gives rise to the recurrent layer.
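The parameter saving can be illustrated with a minimal sketch of one shared layer applied recurrently. The `RecurrentStep` module and its update rule are invented for illustration and are not the actual $N^2$ layer:

```python
import torch
import torch.nn as nn

class RecurrentStep(nn.Module):
    """One shared layer: learns a displacement from the *current* positions."""
    def __init__(self, d):
        super().__init__()
        self.lin = nn.Linear(d, d)

    def forward(self, h, adj):
        delta = torch.tanh(self.lin(adj @ h))  # displacement for this step
        return h + delta                       # move nodes; shape preserved

n, d, steps = 5, 8, 4
layer = RecurrentStep(d)                       # a single parameter set ...
h = torch.randn(n, d)
adj = torch.eye(n)                             # stand-in adjacency
for _ in range(steps):                         # ... applied recurrently
    h = layer(h, adj)

# The recurrent model carries the parameters of one layer, not of `steps` layers.
n_params = sum(p.numel() for p in layer.parameters())
assert n_params == d * d + d
```

Stacking `steps` distinct layers would instead cost `steps * (d * d + d)` parameters.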
**Q2**: Why propose proximity measurement instead of distance metrics?
**Response**: Compared to distance metrics, such as Manhattan and Euclidean distance, proximity measurement has **lower spatial complexity**. We tested distance and proximity with `torch.cdist` and `torch.matmul` during model design. Results show that distance-based $N^2$ is on par with proximity-based $N^2$, but the distance version encounters out-of-memory problems under many hyperparameter settings. Therefore, we choose proximity (inner product) for $N^2$.
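The comparison can be illustrated with the two primitives mentioned above. The sizes are hypothetical, and the reported out-of-memory behaviour also depends on backward-pass buffers, which this sketch does not capture:

```python
import torch

n, m, d = 1000, 32, 64
x = torch.randn(n, d)       # graph-node embeddings (hypothetical sizes)
p = torch.randn(m, d)       # pseudo-node embeddings

# Distance version: torch.cdist builds an (n, m) pairwise-distance matrix,
# and its gradient computation keeps additional intermediate buffers.
dist = torch.cdist(x, p)    # (n, m) Euclidean distances

# Proximity version: a single matmul producing the same (n, m) relation.
prox = x @ p.T              # (n, m) inner products

assert dist.shape == prox.shape == (n, m)
```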
---
Rebuttal Comment 1.1:
Comment: Thank you for your feedback, it answers my questions.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your feedback and are glad that our responses addressed your questions. We will further revise our manuscript following your guidance. | Summary: The authors propose a dynamic message-passing scheme where graph nodes and learnable pseudo nodes are projected into a common space. This allows non-adjacent nodes to communicate immediately whilst retaining linear complexity. The model performs well on a range of benchmarks and can reduce problems with over-squashing/over-smoothing compared to a baseline MPNN.
Strengths: - The novel (to my knowledge) method is explained in depth and well outlined. It is clearly explained and evaluated in the context of other approaches (rewiring, virtual node, transformers).
- The results/experiments are really strong and the method is shown to help alleviate over-squashing/over-smoothing and performs well on a wide range of benchmarks.
- Visualising the distribution of embedded nodes helps with understanding what is happening in practice and I really like this empirical understanding.
- It is clear how this method lowers the computational complexity over methods such as Graph Transformers and the results indicate it can still outperform these approaches.
Weaknesses: - In line 45 the authors argue for their approach over using a single virtual node due to the fact that a VN can have a message-passing bottleneck. Firstly, I think this should be explained more rigorously rather than citing [1] as it is a key argument of the paper to not just have uniform global connections. Additionally, the extent to which your method alleviates this issue needs to be better explained. [Line 297]: you use the balanced load argument to say that this helps alleviate the bottleneck. However, increasing connections between nodes within a community and not having global connections between communities would actually 'increase' the bottleneck on those edges between the communities. The virtual node itself has less of a bottleneck BUT this does not mean bottlenecks in the graph are reduced.
- On a similar note to above, we can reduce the bottleneck of the virtual node by subsampling edges (expander) or by adding more of them (this effectively increases their width) [1]. It is not clear in the paper how your dynamic and weighted approach improves over this (in terms of bottlenecks or some other property).
- There are other dynamic message-passing schemes such as DRew [2]. The paper seems to argue for their approach over something like this due to pseudo nodes directly enabling global message-passing [line 82]. Given that these rewiring approaches seem relevant and closely-connected, I think this comparison and why your method can improve over this needs to be extended and evaluated in depth. For instance, in this case why would you need global message-passing in layer 1 when we always use > 1 layers?
[1] Shirzad et al. EXPHORMER: Sparse Transformers for Graphs. ICML 2023.
[2] Gutteridge et al. DRew: Dynamically Rewired Message Passing with Delay. ICML 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Do you use positional/structural encodings on these benchmarks? Does your baseline GIN/GCN + pseudo node also use these encodings?
- It is not clear how the hyperparameter sweep is performed and what parameters are optimised over and on what metric. Could you provide some information on this?
- Do you have any intuition why the method would outperform GTs on this benchmark? (not just have a lower computational complexity)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Some limitations are outlined in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the valuable suggestions for our paper. Under your guidance, we have discovered new advantages of $N^2$ compared to uniform connections and rewiring. Due to the character limit, we also kindly refer you to the PDF in the global response for the new figures/tables produced during the rebuttal.
**W1&W2**: More explanation of the pseudo-node bottleneck and why $N^2$ can reduce the bottlenecks in both pseudo nodes and input graphs.
**Response**:
1. `Uniform connection leads to pseudo-node bottlenecks` has been explained in L42-45 and Fig 1, before citing [1]. More rigorously, [1,R1] refer to **bottlenecks as nodes with large effective receptive fields**. The uniform connection forms pseudo-node bottlenecks by taking all the graph nodes into the receptive field equally. This results in large effective receptive fields on large-scale graphs and thus forms bottlenecks. We also evaluate $N^2$ with the uniform connection in *Fig R1 in the PDF response*, which shows inferior performance in tackling over-squashing.
2. `Adding pseudo nodes` with uniform connection provides more pseudo-node bottlenecks, where each pseudo node still incorporates all graph nodes into its receptive field.
3. `Subsampling edges` can remove some informative interaction between nodes, leading to sub-optimal message passing.
4. $N^2$ `for pseudo-node bottlenecks`: we have provided both intuitive and empirical analysis in L48-49 and Sec 5.3.3 to show that $N^2$ DOES NOT form pseudo-node bottlenecks. $N^2$ learns the edge weights between graph and pseudo nodes through spatial relations. Therefore, **it can eliminate messages from certain nodes while preserving the others**, reducing the size of the effective receptive fields.
- Note that L297 refers to the pseudo-node bottlenecks specifically. We will fix this in the revision.
5. $N^2$ `for input-graph bottlenecks`: we have provided empirical results in Sec 5.3.2 that $N^2$ can tackle over-squashing. This is because $N^2$ **optimizes the global two-hop message highways based on specific tasks and avoids forming new pseudo-node bottlenecks**. The global highways detour messages from the input-graph bottlenecks. Therefore, the original bottlenecks may still exist but are well circumvented.
**W3**: Comparison with rewiring methods (DRew).
**Response**: Thank you for bringing up this insightful question. Rewiring methods and pseudo-node methods actually represent the local-to-global and the global thread, respectively, and both threads are effective in improving message passing. We choose DRew [2], which has been referred to by many reviewers, to represent rewiring methods. The main reasons we need globality within one layer are:
- `Fewer layers`: Pseudo-node methods require fewer layers than rewiring methods. For example, 6-layer $N^2$ surpasses 13-layer DRew in Tab S7 in the paper. We note that DRew with positional encodings is not taken into comparison, because of the globality injected into node features.
- `Lower memory usage`: Rewiring methods either directly add edges to the input graphs or gather messages from multi-step neighbors in one layer; both require more memory. We evaluate DRew on the same dataset as in Sec 5.3.1. DRew runs out of memory when the number of graph nodes surpasses 200,000 with 2 convolution layers and 8,000 with 3 layers.
- `Better efficiency`: Rewiring edges is also time-consuming. We compare the computational complexity between $N^2$ and DRew with the same number of convolution layers/recursive steps. *Fig R2 in the PDF response* shows that DRew requires more time when scaling to large graphs.
- `Simpler preprocessing`: $N^2$ only requires pre-construction of the adjacency matrix, while DRew samples or selects multi-step neighbors. We even encountered timeout problems when trying a larger number of layers (5, 6) for DRew on large graphs such as ogbn-arxiv.
Besides DRew, we have also compared with other rewiring methods, such as GPRGNN and H$_2$GCN in Tab 3 and 4.
**Q1**: Positional/structural encodings usage.
**Response**: No positional or structural encodings are used for $N^2$. Unless specifically noted, all the baseline results are taken from the original papers. Among the X+pseudo node methods, only MPNN+pseudo node uses positional encodings.
**Q2**: Hyperparameter settings.
**Response**: We performed a grid search for the hyperparameters based on the loss on the validation split, including the recursive steps $\in$[1,10], the hidden and state-space dimensions $\in${64,128}, the number of pseudo nodes $\in${8,16,32,64,128,256,300,320}, and dropout $\in$[0,0.8] with a step of 0.1. The remaining hyperparameters are fixed as reported in the Appendix and have not been specially tuned on individual datasets.
All the learnable parameters in $N^2$ are optimized during training, including the weights in the linear transformation, $\lambda$ in the proximity measurement, and the pseudo/class-node states. The cross-entropy loss is adopted for classification and the L1 loss for regression.
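The sweep described above can be sketched as a plain grid search. `validation_loss` below is a stand-in for training a model and reading its validation loss, and the hidden/state dimensions are collapsed into one knob for brevity:

```python
import itertools

# Hypothetical grid mirroring the reported search space.
grid = {
    "recursive_steps": list(range(1, 11)),
    "hidden_dim": [64, 128],
    "num_pseudo_nodes": [8, 16, 32, 64, 128, 256, 300, 320],
    "dropout": [round(0.1 * i, 1) for i in range(9)],   # 0.0 .. 0.8, step 0.1
}

def validation_loss(cfg):
    # Stand-in for training one configuration and evaluating it on the
    # validation split (invented, deterministic scoring for illustration).
    return abs(cfg["dropout"] - 0.3) + cfg["num_pseudo_nodes"] / 1000

configs = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
best = min(configs, key=validation_loss)

assert len(configs) == 10 * 2 * 8 * 9   # 1440 candidate settings
```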
**Q3**: Why $N^2$ outperforms graph transformers.
**Response**: We attribute this to the flexible pseudo nodes:
- `Simpler objective`. Instead of optimizing relations with all the other nodes, $N^2$ lets graph nodes focus on their relations with the pseudo nodes, which are far fewer in number.
- `Information filter`. The pseudo nodes can also be regarded as the information bottleneck [R2] in the graph space which filters out redundant information. This can be supported by the pseudo-node ablation in Fig. 7, where node amounts are upper-bounded to serve as an effective information filter.
[R1] On the Bottleneck of Graph Neural Networks and its Practical Implications, ICLR21
[R2] Deep Variational Information Bottleneck, ICLR17
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response.
The comparison to DRew and other rewiring-based approaches is very compelling.
I understand that your method can decrease pseudo-node bottlenecks but a better theoretical understanding of overcoming input-graph bottlenecks beyond the TreeMatch experiment is still lacking. The addition of a VN for the TreeMatch experiment does improve the paper and I have raised my initial score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your supportive feedback. We acknowledge that a theoretical understanding of how pseudo-node methods can detour messages from the input-graph bottlenecks is important. Due to the limited rebuttal time, we outline two primary pathways for studying this problem:
- Measuring the curvature between graph nodes with and without pseudo nodes.
- Given the messages to be passed between nodes i and j, recovering the messages from the pseudo-node pathway and the input-graph pathway to measure the information loss.
For the first pathway, we performed an evaluation on amazon-ratings. By employing pseudo nodes, $N^2$ `creates message highways with positive curvature for 95% of graph-node pairs that are negatively curved on the input graphs.` Edges with `high negative curvature` cause the input-graph bottlenecks [R3]. This shows that **$N^2$ can overcome the input-graph bottlenecks by producing non-negatively curved message highways.**
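As a hedged illustration of the curvature argument (not the measurement actually used on amazon-ratings), the triangle-augmented Forman curvature is one cheap proxy under which bridge-like bottleneck edges come out negatively curved:

```python
def forman_curvature(adj, u, v):
    """Triangle-augmented Forman curvature of edge (u, v):
    4 - deg(u) - deg(v) + 3 * #triangles through (u, v)."""
    tri = len(adj[u] & adj[v])
    return 4 - len(adj[u]) - len(adj[v]) + 3 * tri

# A tiny 'barbell': two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {i: set() for i in range(6)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

bridge = forman_curvature(adj, 2, 3)    # 4 - 3 - 3 + 0 = -2
inside = forman_curvature(adj, 0, 1)    # 4 - 2 - 2 + 3 = 3
assert bridge < 0 < inside              # the bottleneck edge is negatively curved
```

A pseudo-node highway between the two triangles would add positively curved two-hop paths around the bridge, which is the effect measured in the response.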
Your suggestions have enlightened us to further validate our methods from a theoretical perspective. We will keep studying this problem.
[R3] Understanding over-squashing and bottlenecks on graphs via curvature. ICLR22 | Summary: This paper proposes an adaptive message passing scheme for Graph Neural Networks that is based on learnable "pseudonodes" which, to a certain extent, decouple the paths along which node features are propagated from the topology of the underlying graph. Both pseudonodes and regular nodes in the underlying graph are embedded in a common space, which makes it possible to use the common embeddings to generate proximity-dependent relations between nodes and pseudonodes that are used for a sparse "global" message passing layer. The proposed method is evaluated in node and graph classification experiments on several benchmark data sets, and the authors find superior performance on several of them.
Strengths: [S1] The authors propose a new adaptive message passing scheme that introduces learnable pseudo-nodes and pseudo-edges, thus introducing dynamic pathways for message passing that are independent of the graph topology and that can be trained for a given learning task. The specific combination of pseudonode message passing and the use of recurrent layers is - to the best of my knowledge - new and original.
[S2] The method is evaluated against several baseline methods for a node and graph classification task in 18 small and large-scale benchmark data sets. The experiments show superior performance for the proposed model in several of the data sets (all six data sets for graph classification, eight out of twelve data sets for node classification).
[S3] Addressing limitations of GNNs that are due to over-squashing and over-smoothing, the authors address an important open issue in deep graph learning.
Weaknesses: [W1] I did not find the motivation to add additional learnable parameters to Graph Neural Networks that decouple message passing pathways from the topology of the input graph convincing. The authors motivate their work based on over-smoothing and over-squashing in GNNs but in my view the paper lacks an intuitive explanation why the proposed dynamic message passing scheme should mitigate those problems, especially since the architecture additionally includes regular (local) message passing. To this end, I think that the different, complex local and global components of the dynamic message passing - though formally defined - are not explained well in the paper and some of the design choices appear to be rather arbitrary.
[W2] I similarly could not follow the motivation for the addition of the recurrent layer, which the authors argue is added "to parameterize the displacements of embedded nodes" and to "revise the learned distribution [] of all embedded nodes and reshape the dynamic message passing pathways". A better explanation would be helpful.
[W3] The idea to add pseudonodes for neural message passing has been previously explored, e.g. in the form of a sparse attention mechanism with so-called "global nodes" in the Exphormer architecture (https://arxiv.org/pdf/2303.06147). A better explanation of the contribution of the authors would be helpful, especially since the experiments show that the performance is very close to this architecture in many of the experiments.
[W4] There are important details missing in the description of the experimental setup, namely whether hyperparameter tuning has been performed (i) for the proposed model and (ii) for the baseline models. Moreover, in section B.2.2 the authors claim that "the detailed hyper-parameter settings on all benchmarks are reported in Tab. S6"; however, this table only includes hyperparameters for the proposed N2 model and not for baseline methods. I also checked the provided code, which does not cover the baseline experiments.
As it is - despite the claims made in the answer to Q4 in the checklist - I do not consider the results showing superior performance compared to the baseline models reproducible based on the information provided in the paper and in the supplementary material.
[W5] One could argue that the proposed approach to define pseudo-nodes and pseudo-edges that participate in the message passing could also be seen as a learnable graph pooling layer for GNNs, where pseudonodes take the role of supernodes. As such, I believe that the paper lacks a more detailed discussion of related works on (trainable) graph pooling operations (there is a single mention of graph pooling in the first paragraph of the related work section).
[W6] Similarly, while the paper briefly mentions hierarchical Graph Neural Networks as related work that combines local and global message passing to learn multi-scale features, no explicit comparison to such approaches is included in the experimental evaluation.
Technical Quality: 3
Clarity: 2
Questions for Authors: I kindly ask the authors to address the following questions, which emerged during my review of the work:
- Please provide more insights on the motivation behind the different ingredients of the dynamic message passing scheme, especially on (i) why the proposed local and global message scheme is supposed to address over-smoothing and over-squashing, and (ii) the motivation of the additional recurrent layer (see comments in [W1] and [W2]). For the first question, I see that there is an experimental analysis but I failed to see how this analysis supports the claims. For the latter question, it would be helpful to include an ablation study that removes the recurrent layer altogether. If I understood the ablation study correctly, currently only the influence of the number of recurrent layers is investigated. Or does the removal of the pseudo-node adaptation (Table 5) correspond to the removal of the recurrent layers?
- Please clarify your contribution over other approaches using adaptive message passing architectures that combine local and non-local message exchanges (see comments in [W3], [W5] and [W6]).
- Please provide details on the choice of hyperparameters and the use of hyperparameter tuning, both for the baseline methods and the proposed architecture (see detailed comments in [W3])
- Please add details on the ablation study results shown in Table 5, especially the number of experimental runs and the standard deviation of results.
- Please increase the font of the results in tables 1 - 4. The text is too small to be readable in a printed version. Please also increase the size of the figures, especially figure 5 and 6, which are way too small to be readable in a printed version.
During my review, I found the following typo:
- line 259: Complex_i_ty Analysis
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: In light of the open questions outlined above, I consider the discussion of limitations in appendix E overly short and not complete.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable suggestions, which help us improve our paper. Due to the character limit, we respectfully refer you to the PDF in the global response for the new figures/tables produced during the rebuttal.
**W1&Q1(a)**: Why $N^2$ mitigates over-smoothing/squashing.
**Response**:
1. `Over-smoothing`
- Unlike the uniform connection that shares edge weights for all the graph nodes, our dynamic connection measures specific relations for each graph node. Therefore, although two nodes are connected on the input graphs, they receive different outputs from global message passing (MP) and avoid becoming too similar to each other.
- Learning node displacements allows the addition of layer input, local and global output. Thus, even with local MP, the other two features maintain high-frequency signals.
2. `Over-squashing`
- Pseudo nodes produce two-hop message highways, detouring messages from the graph bottlenecks.
- Dynamic connections avoid forming new pseudo-node bottlenecks that hinder global MP.
3. `Why local MP`: previous studies show the importance of encoding graph structures[R1]. Therefore, we adopt local MP to implicitly encode graph structures.
> To further evaluate our dynamic connection, we apply the uniform connection to $N^2$, which shows **performance degradation in tackling over-smoothing/squashing problems** in *Fig R1 in the PDF response*.
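The intuition that the all-pass (residual) term prevents feature collapse can be sketched with an extreme low-pass operator. The similarity weights below are a stand-in for the learnable global filter, not the actual $N^2$ update:

```python
import torch

torch.manual_seed(0)
n, d, steps = 8, 4, 10
adj = torch.ones(n, n) / n                 # extreme low-pass: global averaging

h_lowpass = torch.randn(n, d)
h_mixed = h_lowpass.clone()

for _ in range(steps):
    h_lowpass = adj @ h_lowpass                      # low-pass only: collapses
    w = torch.softmax(h_mixed @ h_mixed.T, dim=-1)   # learnable-similarity stand-in
    h_mixed = h_mixed + adj @ h_mixed + w @ h_mixed  # all-pass + low-pass + global

# Low-pass-only features become identical across nodes; the residual
# (all-pass) term keeps the mixed features distinguishable.
assert h_lowpass.std(dim=0).mean() < 1e-4
assert h_mixed.std(dim=0).mean() > h_lowpass.std(dim=0).mean()
```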
**W2&Q1(b)**: Motivation and ablation on the recurrent layer.
**Response**:
- `Motivation for the layer`: We propose the recurrent layer, including BOTH pseudo-node adaptation and dynamic MP, to move nodes from current positions towards the optimal positions (L147-148).
- To move towards optimal position: Given the current node positions, a recurrent layer learns the displacements the nodes should take, changing their distribution in the space.
- To base on current positions: The recurrent layer employs pseudo-node adaptation and bases the edge weights for MP on the spatial relations between nodes, to adapt to nodes in various positions. The changes in position also reshape the spatial relations and thus reshape the message pathways.
- `Motivation for the recurrence`: We adopt a single recurrent layer used recurrently to reduce the number of parameters, instead of modeling node displacement by different layers for each step.
- `Ablation`. We have already evaluated layers without recurrence in Sec 5.3.4: there, multiple stacked layers do not share parameters and are thus not recurrent. We further remove the layer completely; with only the input and output modules employed, *Tab R1(b) in the PDF response* shows performance degradation.
**W3&Q2(a)**: Contribution compared to Exphormer.
**Response**:
1. Compared to Exphormer, we highlight our $N^2$ as follows
- `A new perspective` to model MP, where nodes are distributed in a common space; they communicate based on their spatial relations and learn to optimize their positions.
- `Fewer parameters` by a single shared recurrent layer.
- `Better scales up to large graphs` (Tab 4).
2. Technically speaking, our $N^2$:
- `decouples global MP` into g(raph)-nodes to p(seudo)-nodes, p to p, and p to g. Instead, Exphormer performs MP between all nodes, which cannot differentiate between messages from pseudo or graph nodes.
- `multiplies proximity` with messages. Instead, Exphormer employs normalized attention, which can yield uniformly small weights and attenuate the messages, especially when the optimal assignment maps a large number of graph nodes to the same pseudo node.
> We also evaluate the mixed messages or attention from Exphormer on $N^2$, where both show performance degradation in *Tab R1(c) in the PDF response*.
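The decoupled, unnormalized scheme described above might be sketched as follows. The shapes and update rules are simplified illustrations, not the actual $N^2$ equations:

```python
import torch

torch.manual_seed(0)
n_g, n_p, d = 10, 3, 8
g = torch.randn(n_g, d)                  # graph-node states
p = torch.randn(n_p, d)                  # pseudo-node states

# Unnormalized proximity: each graph node keeps its own edge weight, so
# assigning many nodes to one pseudo node does not shrink the weights the
# way a normalized attention row would.
w = torch.sigmoid(g @ p.T)               # (n_g, n_p)

p = p + w.T @ g                          # g -> p: gather from graph nodes
p = p + torch.softmax(p @ p.T, -1) @ p   # p -> p: dense pairwise exchange
g = g + w @ p                            # p -> g: scatter back to the graph

assert g.shape == (n_g, d) and p.shape == (n_p, d)
```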
**W4&Q3**: Hyperparameters.
**Response**: We have performed grid search for the hyperparameters of $N^2$ (Tab S6) and the reproduced models based on validation loss. We emphasize that the superiority of our results is valid and the baseline results can be fully reproduced. Please refer to our AC for the `URL of detailed hyperparameter config files`.
**W5&Q2(b)**: Design comparison with graph pooling.
**Response**: First, we clarify that we have provided a discussion on graph pooling in L86-91, not just in L76. Second, the differences between pseudo nodes and graph pooling can be summarized as:
- `Different objectives`. Pseudo nodes optimize communication efficiency while pooling nodes capture the hierarchy in the graph structures, ensuring better structure compression.
- `Physical connection`. Pseudo nodes are physically connected to graph nodes as part of the graphs and participate in the MP. In contrast, pooling nodes are higher-level abstractions of graph nodes and are not physically connected with them.
- `Intercommunication`. Pooling nodes use the coarse adjacency matrix for the MP. Pseudo nodes are free from structure-preserving constraints and learn fully connected pathways.
**W6**: Comparison with hierarchical GNNs.
**Response**: We have performed comparisons in Sec 5.1 with some representative hierarchical GNNs from L86-91. Please let us know if you are referring to other types of GNNs.
**Q4**: Details for Tab 5.
**Response**: Tab 5 follows the settings from Sec 5.1/5.2. and the hyperparameter setups in Tab S6. *Tab R1(a) in the PDF response* shows the standard deviation results for three runs.
**Q5**: Font size, typo.
**Response**: Thank you for your suggestions. We will fix them in our revision to ensure readability.
**Q6 (Limitation)**: More limitation discussion based on the reviewer's open questions.
**Response**: We will revise our paper based on your reviews. For Q1(i), we can only provide intuitive/empirical analysis, not rigorous proof due to limited rebuttal time. We will add this to our limitation part and keep working on it.
[R1] Do Transformers Really Perform Bad for Graph Representation? NeurIPS21
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the very detailed response. While I do not consider all of my criticism to be addressed fully, I do acknowledge that some of my questions on the motivation and the relation to graph pooling have been answered, which is why I decided to raise my score.
---
Rebuttal 2:
Title: anonymized scripts with experiment hyperparams
Comment: Dear reviewer,
Take note that, as per the conference instructions, the authors forwarded me a link to an external anonymized page that shows the scripts they used & the associated hyperparameters.
Let me know if you would like to see something specific and I will forward you the information.
kind regards
AC
---
Rebuttal 3:
Comment: Thank you very much for your positive feedback. We will refine our manuscript based on the rebuttal, including clarifying our motivations and contributions, delineating hyperparameter setups (based on our provided config files), and increasing the font size of tables and figures. We sincerely appreciate your suggestions, which have significantly helped us improve our work. | Summary: The paper considers the problem of flexible message passing with low complexity in GNNs. To tackle this concern, the paper proposes a novel dynamic message-passing mechanism for GNNs by projecting graph nodes and learnable pseudo nodes into a common space with measurable spatial relations. Based on this dynamic message passing mechanism, the paper constructs a GNN model named $N^2$ to evaluate the effectiveness and efficiency of the proposed message passing mechanism.
Strengths: 1. A novel dynamic message passing mechanism for GNNs is proposed to provide flexible message passing with low complexity. The dynamic message passing is interesting.
2. The constructed N^2 GNN model is simple and effective. The complexity requirement is guaranteed.
3. The proposed N^2 holds superior performance.
Weaknesses: See questions.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. The analysis of the dynamic message-passing mechanism. Can we learn an additional pseudo node to provide global guidance (like a cluster center)?
2. Can authors provide the discussion between dynamic and adaptive message passing?
3. Can authors show the possibility of incorporating the dynamic message passing mechanism into other networks?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your support and valuable suggestions that further broaden the potential application of our method to graph interpretations and other neural networks. We will keep studying these problems in the future.
**Q1**: Can we learn an additional pseudo node to provide global guidance?
**Response**: If we understand your question correctly, the additional pseudo node refers to a pseudo node of the current pseudo nodes. Please let us know if we have misunderstood.
- `Global guidance in` $N^2$. Given the number of pseudo nodes is relatively small, we directly apply **dense pairwise message passing between pseudo nodes** in the original $N^2$, which empowers the pseudo nodes to **capture global information**. The captured global information can be further fed back to graph nodes and serve as global guidance. The learning of global information can be supported by $N^2$ for graph classification where we employ pseudo-node states as the graph-level outputs.
- `Additional pseudo nodes`. Following your suggestion, we further add an additional pseudo node that aggregates messages from the current pseudo nodes. The results are presented in Tab. R2. We can see that the additional pseudo node does not further benefit $N^2$. However, regarding the pseudo nodes as cluster centers is interesting and may help us extend $N^2$ to graph interpretations. We will keep studying it in the future.
| Table R2 | Additional Pseudo Node | Original |
| :-- | :--: | :--: |
| amazon-ratings | 49.41 | 50.25 |
| AmazonPhoto | 95.36 | 95.75 |
| PROTEINS | 76.48 | 77.53 |
**Q2**: Discussion between dynamic and adaptive message passing.
**Response**: We kindly refer you to the introduction and related works in our manuscript, where we have discussed dynamic message passing in relation to adaptive message passing. To summarize, dynamic message passing `achieves global message passing without forming new bottlenecks` and `requires linear computational complexity`. Dynamic message passing also enables parameter sharing, thus `reducing the number of parameters required`.
**Q3**: Broader application of the dynamic message passing.
**Response**: Thank you for raising this interesting and inspiring question. First, in the context of graph representation learning, dynamic message passing currently only employs neighbor smoothing for local message passing, to keep our method as simple as possible. However, `it can be further combined with other graph models to improve the effectiveness of our local message passing`.
Second, our dynamic pseudo-node strategy `can also be applied to networks such as Transformers to reduce their computational complexity`. This is a very interesting direction and we will explore it in the future.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. The authors make a clear explanation of my concerns. I retain my rating score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback and support. Your suggestions have helped us improve our work. We are glad that our responses addressed your concerns. | Rebuttal 1:
Rebuttal: We appreciate all the valuable suggestions and the time the reviewers have spent on our work. Your reviews have helped us greatly in improving the manuscript.
**Summary of strengths**. We sincerely appreciate that you find our method:
- novel and interesting (reviewers Qj5n, ZLZY, 4Q2o, and UBFx);
- clearly explained and evaluated (reviewers 4Q2o and UBFx);
- mitigates over-squashing/smoothing, an important open issue in deep graph learning (reviewers ZLZY, 4Q2o, and UBFx);
- comes with guaranteed computational complexity (reviewers Qj5n and 4Q2o);
- provides good visualization and analysis of the model (reviewers 4Q2o and UBFx);
- exhibits promising experimental performance (reviewers Qj5n, ZLZY, 4Q2o, and UBFx);
- widely applicable (reviewers 4Q2o and UBFx).
In the subsequent responses, we aim to address your concerns comprehensively, providing an item-by-item response to each of your comments.
We have provided new empirical results suggested by the reviewers. *Due to the character limit, Tab. R1, Fig. R1, and Fig. R2 are displayed in the global response PDF.*
Pdf: /pdf/cea479db8b002142e68dcdcf3b5511809ce1cdf4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How Does Black-Box Impact the Learning Guarantee of Stochastic Compositional Optimization? | Accept (poster) | Summary: In this paper, the authors systematically analyzed the generalization error and optimization error of the stochastic compositional optimization problems (for black-box cases).
Strengths: 1. The paper is well-written and well-organized.
2. The paper provides the generalization analysis and optimization analysis for stochastic compositional optimization problems in a systematic way. The convergence rates for three different black-box cases are provided.
Weaknesses: 1. The technical challenge of extending the theoretical analysis (stability analysis and optimization analysis) for stochastic compositional optimization problems from white-box cases to black-box cases is not clearly discussed. What are the key technical tools employed to address the key challenge? It seems that the black-box gradient approximation in Eq.(5) satisfies the bounded variance assumption. So it is easy to plug this into the existing white-box analysis framework to achieve a generalization error and optimization convergence rate. What are the non-trivial techniques involved in improving the theoretical results compared with the previous work?
2. Assumption 5 for the non-convex analysis is questionable. Assumption 5 implies that any stationary point, i.e., any $w$ with $\mathbb{E}[|| \nabla F_S(w) ||^2]=0$, is the global optimal point $w(S)$. Is this assumption too strong for non-convex analysis?
Minor:
Typos: Eq.(5) is a second-order approximation of Eq.(4), "=" may be incorrect.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the key technical challenge of extending the theoretical analysis (stability analysis and optimization analysis) for stochastic compositional optimization problems from white-box cases to black-box cases?
2. What are the key technical tools employed to address the key challenge?
3. Is this assumption 5 too strong for non-convex analysis?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to you for your valuable comments and constructive suggestions.
**Q1:** What are the key challenges of extending the theoretical analysis for SCO problems from white-box cases to black-box cases? What are the technical tools employed to address these key challenges? What are the non-trivial techniques involved in improving the theoretical results compared with the previous work?
**A1:** Thanks for your constructive comment. The key challenges and technical tools of extending the theoretical analysis for SCO problems from white-box cases to black-box cases are listed as follows.
**1) Generalization:** Considering three different types of black-box SCO methods, we apply our new non-convex analysis (Theorem 3) to these cases (Theorem 4, Corollaries 1 and 2) in Section 3.2. Due to the different function forms, there are some differences in the upper bounds of the first-order and second-order gradients of $\tilde{\nabla} f$ (please see lines 513-518) between Theorem 3 and the generalization part of Theorem 4. The differences between Theorem 3 and Corollaries 1 and 2 are the same as for Theorem 4.
**2) Optimization:** The lines 546-547 mentioned by you are related to our optimization part. For optimization, the estimated gradient does introduce several extra terms regarding the accuracy of the gradient estimation, i.e., $\tilde{\nabla} f-(p+1/2)\beta\nabla f$ and $\tilde{\nabla} f-\beta\nabla f$. These terms are derived from some special strategies (such as a special decomposition $\tilde{\nabla} f=\tilde{\nabla} f+(p+1/2)\beta\nabla f-(p+1/2)\beta\nabla f$ in the second equality of line 546). We propose an extended lemma (Lemma 6) from [1] and combine this lemma with these strategies to limit the expansion (line 549) of $\mathbb{E}[F_S(w_{t+1})-F_S(w(S))]$ during the iterations. Otherwise, these extra terms will lead to the divergence of our result.
Finally, we want to emphasize our advantages compared with previous work related to the generalization guarantee of SCO [2].
**1) Better results:** For convex optimization, Theorem 2 leverages the co-coercivity property of convex and smooth function to provide the stability bound $\mathcal{O}((n^{-1}+m^{-1})\beta \log T)$ under milder parameter selection than [2]. And our proof is more concise since it avoids the intermediate step which measures the distance between $v$ and $g(w)$ in the analysis of [2].
**2) Non-convex guarantee:** We leverage a special lemma, the almost co-coercivity lemma, to extend our proof framework to the non-convex case and obtain the first stability bound $\mathcal{O}((n^{-1}+m^{-1})T^{\frac{1}{2}}\log T)$ under milder parameter selection than [1].
We have supplemented the above explanations in Section 3.2 of our new manuscript to benefit readers’ understanding.
[1] J. Duchi, et al. Optimal rates for zero-order convex optimization: The power of two function evaluations. TIT, 2015.
[2] M. Yang, et al. Stability and generalization of stochastic compositional gradient descent algorithms. 2023.
***
**Q2:** ...Is Assumption 5 too strong for non-convex analysis?
**A2:** Assumption 5 is called the Polyak-Lojasiewicz (PL) condition, which is used extensively in non-convex optimization [3-5]. This assumption states that the function value gap $\mathbb{E}[F_S(w)-F_S(w(S))]$ is dominated by the squared gradient norm [6]. It implies that all empirical local optimal parameters are empirical global optimal parameters [7]. We use it to prepare for the characterization of $\mathbb{E}[F_S(A(S)) − F_S(w(S))]$ instead of $\mathbb{E}[||\nabla F_S(A(S))||^2]$ [8]. We have added the above explanation behind Assumption 5 in our new manuscript. Besides, we have added a section at the end of the Appendix to discuss this assumption in detail.
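For reference, the PL condition can be written in the following standard form (we denote the PL constant by $\mu_{\mathrm{PL}}>0$; the symbol is ours, chosen to avoid clashing with the smoothing parameter $\mu$):

```latex
F_S(w) - F_S(w(S)) \;\le\; \frac{1}{2\mu_{\mathrm{PL}}}\,\big\|\nabla F_S(w)\big\|^2
\qquad \text{for all } w,
```

so at any stationary point ($\nabla F_S(w)=0$) the left-hand side is at most zero, i.e., the point attains the empirical global minimum, matching the implication noted in the question.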
[3] Y. Lei, et al. Generalization guarantee of SGD for pairwise learning. NeurIPS, 2021.
[4] S. Reddi, et al. Stochastic variance reduction for nonconvex optimization. ICML, 2016.
[5] D. Foster, et al. Uniform convergence of gradients for non-convex learning and optimization. NeurIPS, 2018.
[6] Y. Bai, et al. On the complexity of finite-sum smooth optimization under the Polyak-Łojasiewicz condition. ICML, 2024.
[7] H. Karimi, et al. Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition. ECML-PKDD, 2016.
[8] A. Kuchibhotla and A. Chakrabortty. Moving beyond sub-Gaussianity in high-dimensional statistics: Applications in covariance estimation and linear regression. Information and Inference: A Journal of the IMA, 11(4):1389–1456, 06 2022.
***
**Q3:** Typos: Eq.(5) is a second-order approximation of Eq.(4), "=" may be incorrect.
**A3:** Equation (5) is consistent with the forms used in previous work [9,10]. The second-order term of the Taylor expansion is treated as the remainder, evaluated at $v=v_{t+1}^*$, where $v_{t+1}^*$ is an unknown model parameter that we do not need to know explicitly. We have further explained this behind Equation (5) in our new manuscript.
[9] K. Nikolakakis, et al. Black-box generalization: Stability of zeroth-order learning. NeurIPS, 2022.
[10] J. Chen, et al. Fine-grained theoretical analysis of federated zeroth-order optimization. NeurIPS, 2023.
***
**Q4:** What is the key technical challenge of extending the theoretical analysis (stability analysis and optimization analysis) for stochastic compositional optimization problems from white-box cases to black-box cases? What are the key technical tools employed to address the key challenge?
**A4:** This question is the same as **Q1**. Please see **A1**.
***
**Q5:** Is this assumption 5 too strong for non-convex analysis?
**A5:** This question is the same as **Q2**. Please see **A2**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed response. My concern has been well addressed. I have no further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your constructive comments and recognition of our work. | Summary: This work presents the generalization upper bound for two stochastic compositional optimization methods, SCGD and SCSC, under convex and non-convex setting. For convex setting, the presented generalization bound is tighter compared to the existing work and it matches the generalization bound of SGD for optimization without compositional structure. For non-convex setting, this work establishes the first generalization bound in SCO literature. Furthermore, this work studies the zeroth-order extension of SCGD/SCSC and presents their excess risk bound in non-convex setting. This is also new in the literature.
Strengths: 1. The main contribution of this work is the theoretical analysis as I summarized in the summary section.
2. It is interesting to see how estimation distance $\mu$ and the number of estimation directions $b$ affect the excess risk bound.
3. This work is well-written and easy to follow.
Weaknesses: I do not recognize any significant weakness in this work. But I do have some questions.
1. In Table 1, I noticed that assumption V (bounded variance) and assumption M (bounded function) appear for SCGD/SCSC (Thm. 2) but not for SGD ([26] Thm. 3.8). Based on my understanding, the assumptions in both of these analyses are the Lipschitzness and smoothness of the unbiased stochastic function values. I do not see any major difference between them. Can the authors clarify what V (bounded variance) and M (bounded function) stand for?
2. In Table 2, the optimization bounds for the black-box methods seem to omit the terms involving $T,n,m$. This confuses me. Is there any reason that the $T,n,m$ terms are ignored here?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness section.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No significant limitations in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to you for your valuable comments and constructive suggestions.
**Q1:** ...Can the authors clarify what V (bounded variance) and M (bounded function) stand for?
**A1:** Thanks for your constructive comment. As you mentioned, Lipschitz continuity and smoothness are both assumed in our Theorem 2 and in Theorem 3.8 of [1]. These two assumptions are the most common conditions in learning theory.
The bounded variance assumption (**Assumption 2**) is also a classical condition [2-5]. It limits the ranges of the variance value of the given functions $g$.
The bounded function assumption (**Assumption 4**) is different from the traditional bounded function assumption $|f|\leq M$. Assumption 4 requires the distance between two adjacent function outputs to be bounded, i.e., $$||g(w+\mu u)-g(w)||\leq M_g, |f(v+\mu u)-f(v)|\leq M_f,$$ which is milder than the traditional bounded function assumption $|f|\leq M$. Besides, it also requires the distance between two adjacent gradient outputs to be bounded, i.e., $$||\nabla g(w+\mu u)-\nabla g(w)||\leq M_g^\prime, ||\nabla f(v+\mu u)-\nabla f(v)||\leq M_f^\prime,$$ which is milder than bounded gradient condition $||\nabla f||\leq L$ [1].
The above explanations have been added in the remark behind Assumption 2 and Remark 2. We hope our explanations can help you understand Assumptions 2 and 4.
[1] M. Hardt, et al. Train faster, generalize better: Stability of stochastic gradient descent. ICML, 2016.
[2] A. Nemirovski, et al. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[3] Y. Zhou, et al. Understanding generalization error of SGD in nonconvex optimization. Machine Learning, 111(1):345-375, 2022.
[4] M. Wang, et al. Stochastic compositional gradient descent: Algorithms for minimizing compositions of expected-value functions. Mathematical Programming, 161(1-2):419-449, 2017.
[5] T. Chen, et al. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. TSP, 69:4937-4948, 2021.
***
**Q2:** In Table 2, the optimization bounds for the black-box methods seem to omit the terms involving $T,n,m$. This confuses me. Is there any reason that the $T,n,m$ terms are ignored here?
**A2:** In Table 2, we only show the leading orders of our convergence results; their corresponding complete forms are given at the end of each proof in our Appendix. For example, in line 552, the bound is $\mathcal{O}\left(\mu^2(T^{-1}\log T+1)+b^{-1}(d_1T^{-1}\log T+d_2)\right)$. Since the $T^{-1}\log T$ terms vanish as $T$ grows, we only keep $\mu^2+b^{-1}d_2$.
To increase readability, we have modified our results in Table 2 to make the comparisons clearer by selecting specific $\mu, b$. For example, to obtain a convergence rate $\mathcal{O}\left(T^{-\frac{1}{4}}\right)$ similar to [4], we can select $\mu=\mathcal{O}\left(T^{-\frac{1}{8}}\right), b=\mathcal{O}\left(T^{\frac{1}{4}}d_2\right)$. **For $\mu$**, this parameter can even be taken as a smaller value, such as $\mu=0.0001$ in [6]; therefore, $\mu=\mathcal{O}\left(T^{-\frac{1}{8}}\right)$ is reasonable. **As for $b$**, although its value $\mathcal{O}\left(T^{\frac{1}{4}}d_2\right)$ may be large enough to increase the time complexity of model training, this work aims first to show that black-box SCO algorithms can converge and to identify the impacting factors, rather than to obtain an optimal convergence rate. The two extra parameters $\mu, b$ do impact the convergence rate compared with the corresponding first-order methods.
[6] P. Chen, et al. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. AISec@CCS, 2017.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your constructive comments and recognition of our work. | Summary: This paper studies stability-based generalization bound for SCGD and SCSC, as well as the convergence rated of their black-box variants. The authors provide sharper generalization bounds for these algorithms, and the first convergence bounds for the black-box variants of these algorithms under PL-conditions.
Strengths: ### Strengths:
1. **Novelty of the Problem**: The generalization bounds of stochastic optimization algorithms have been previously studied (e.g., [21]), and basic mathematical tools (algorithmic stability) for such studies have been proposed. This paper is the first to study black-box SCSC, although from the proofs of Theorems 3 and 4, the analyses for SCGD and SCSC are essentially the same.
2. **Significance of the Results**:
   - **Generalization Bound**: The authors provide a generalization bound of \(O((1/n + 1/m)\log T + 1/\sqrt{n})\) for SCGD and SCSC, which is significantly sharper than the previously known bound. This is a noteworthy contribution.
   - **Algorithm Convergence**: The paper presents convergence bounds for algorithms using estimated (black-box) gradients. While the dependence on \(T\), \(n\), and \(m\) is similar to [21], there are additional terms related to the estimation distance \(\mu\) and the sample size \(b\). Moreover, the authors prove these results under a milder condition (convexity vs. PL condition).
There are no experiments in this paper. However, the main goal of this paper is to study the properties of existing algorithms, so experiments are not necessary. But in the remarks the authors say that their theorems provide a more "practical choice" of parameters, so experiments might still be helpful to demonstrate this.
### Weaknesses/Questions:
1. **Comparing Convergence Rates**: In Table 2, it is difficult to compare the convergence rate of the proposed algorithm with other methods. The theoretical results of other methods are provided with respect to \(T\), but for the proposed methods, the dependence is given with respect to \(\mu\) and \(d\). This inconsistency makes the table unclear. For instance, Theorem 3.7 of [21] also provides a way to configure \(T\). It should be made clearer that \(\mu\) and \(b\) are extra terms.
- **Parameter \(\mu\)**: The parameter \(\mu\) is mentioned only once in Line 112, and it is unclear how good the \(O(\mu^2)\) bound is. It seems that \(\mu\) is used in the estimation of the gradient and is a constant. Therefore, Theorem 4 suggests that even if \(n\), \(m\) and \(b\) approach infinity, the difference between function values will converge to a constant instead of zero.
2. **Redundancy in Proofs**: In the appendix, the proofs of Theorems 3 and 4 appear nearly identical (at least for parts 1) and 2)), with the main difference being the use of the estimated gradient. Additionally, the proof of Corollary 1 repeats similar arguments. I encourage the authors to reduce redundancy, shorten the proofs, and highlight the main differences to enhance readability.
3. **Challenges of Estimated Gradient**: The current form of the proofs makes it difficult to tell the main challenges of introducing the estimated gradient in Theorem 4 compared to Theorem 3. From Lines 546-547, it seems the estimated gradient only introduces several extra terms regarding the accuracy of the gradient estimation, which appears to be standard. Clarifying the main challenges and differences would improve the paper.
Weaknesses: Please see above.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to you for your valuable comments and constructive suggestions.
**Q1:** experiments might still be helpful
**A1:** Thanks for your constructive comment. We agree that some experiments might still be helpful to demonstrate that our parameter selections are more practical than those of [1]. There are existing experimental results [2-5] that validate the statement in line 188.
**1) For T:** [1] provided some generalization bounds for convex SCGD and SCSC with some impractical $T$ such as $T=O(\max(n^{7/2},m^{7/2}))$ in Theorem 4. While our convex result (Theorem 2) and non-convex result (Theorem 3) can achieve similar rates even taking $T=O(\max(n,m))$ which better matches some empirical observations (Figures 1, 2 in [2], Figures 2 in [3], and Figures 2, 3 in [6]).
**2) For $\eta_t$:** Theorem 4 [1] took $\eta_t=T^{-6/7}$ which is too small when $T$ is large. While our Theorem 2 takes $\eta_t=O(t^{-1})$ closer to some empirical selections ($\eta_t=O(t^{-3/4})$ [2,3] and $\eta_t=O(t^{-1})$ [4,5]).
**3) For $\beta_t$:** Theorem 4 [1] took $\beta_t=T^{-4/7}$ which is also too small since [2,3,5] empirically select $\beta_t=t^{-1/2}$ or $\beta_t=t^{-1}$. In contrast, our theorems have no special restriction on $\beta_t$.
We have supplemented these explanations in Remark 3 of our new manuscript to benefit readers’ understanding.
[1] M. Yang, et al. Stability and generalization of stochastic compositional gradient descent algorithms. 2023.
[2] T. Chen, et al. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. TSP, 2021.
[3] M. Wang, et al. Stochastic compositional gradient descent: Algorithms for minimizing compositions of expected-value functions. Mathematical Programming, 2017.
[4] Z. Huo, et al. Accelerated method for stochastic composition optimization with nonsmooth regularization. AAAI, 2018.
[5] J. Zhang et al. A stochastic composite gradient method with incremental variance reduction. NeurIPS, 2019.
***
**Q2:** how good the O(\mu^2) bound
**A2:** We first explain the parameters $\mu$ and $b$ again.
**1) For $\mu$:** Eq. (4) estimates the unknown gradient by the standard finite difference method, where $\mu$ denotes the approximation distance between two model parameters $v+\mu u$ and $v$. The closer the approximation distance, the more accurate the gradient estimation.
**2) For $b$:** The parameter $b$ denotes the number of approximation directions. The more approximation directions, the more accurate the gradient estimation.
In a word, our convergence rates depend on the quality of gradient estimation measured by the two parameters $\mu$ and $b$. Next, we will compare our convergence rates with other rates.
Besides providing the generalization guarantees of (black-box) SCO methods, our main aim is to theoretically confirm the intuition that the more accurate the gradient estimation, the better the convergence rate. For the comparisons in Table 2, we have supplemented the selection of $\mu$ and $b$ in the remarks (Section 3.2) of our new manuscript to achieve convergence rates similar to [1-3]. For example, to obtain a convergence rate $O(T^{-1/4})$ similar to [3], we can select $\mu=O(T^{-1/8}), b=O(T^{1/4}d_2)$. $\mu$ can even be taken as a smaller value, such as $0.0001$ in [6]; therefore, $\mu=O(T^{-1/8})$ is reasonable. And, although $b=O(T^{1/4}d_2)$ may be large enough to increase the time complexity of model training, this work aims first to show that black-box SCO algorithms can converge and to identify the impacting factors, rather than to obtain an optimal convergence rate.
By selecting the above specific $\mu, b$, we have modified our results in Table 2 to make the comparisons more clear.
[6] P. Chen, et al. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. AISec@CCS, 2017.
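To make the roles of $\mu$ (approximation distance) and $b$ (number of approximation directions) concrete, here is a minimal sketch, our own and not the paper's exact Eq. (4) estimator, of a multi-direction two-point zeroth-order gradient estimate:

```python
import numpy as np

def zo_gradient(f, x, mu, b, rng):
    """Two-point zeroth-order gradient estimate: smoothing radius mu
    (smaller -> less bias, the O(mu^2) term in the bound) and b Gaussian
    directions (more -> less variance, the O(b^{-1} d) term)."""
    fx = f(x)
    U = rng.normal(size=(b, x.shape[0]))                  # b random directions
    diffs = np.array([(f(x + mu * u) - fx) / mu for u in U])
    return diffs @ U / b                                  # average of (f diff) * u

# Sanity check on f(x) = ||x||^2, whose exact gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(lambda v: v @ v, x, mu=1e-5, b=50000,
                rng=np.random.default_rng(0))
```

With a small $\mu$ and a large $b$, the estimate concentrates around the true gradient, which is exactly the trade-off the $\mathcal{O}(\mu^2 + b^{-1}d_2)$ term captures.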
***
**Q3:** Redundancy in Proofs
**A3:** We have simplified our proofs in the new manuscript to enhance readability. For example, we have omitted the part using Lemma 3 in the proof of Theorem 4. To further clarify the differences, we have added a table (in "Author Rebuttal") at the head of Appendix C.
***
**Q4:** Challenges of Estimated Gradient
**A4:** We will emphasize the main challenges of introducing the estimated gradient as follows.
**1) Optimization:** Lines 546-547 are related to our optimization part. For optimization, the estimated gradient does introduce several extra terms, i.e., $\tilde{\nabla} f-(p+1/2)\beta\nabla f$ and $\tilde{\nabla} f-\beta\nabla f$. These terms are derived from some special strategies (such as a special decomposition to $\tilde{\nabla} f$ in the second equality of line 546). We propose Lemma 6 based on [7] and combine it with these strategies to limit the expansion (line 549) of $\mathbb{E}[F_S(w_{t+1})-F_S(w(S))]$ to ensure the convergence of our result.
**2) Generalization:** We apply our new non-convex analysis (Theorem 3) to three types of black-box SCO methods (Theorem 4, Corollaries 1 and 2). Different function forms lead to some differences related to the first-order and second-order gradients of $\tilde{\nabla} f$ (please see lines 513-518) between Theorem 3 and Theorem 4. The differences among Theorem 3 and Corollaries 1, 2 are the same as Theorem 4. Our new non-convex generalization analysis (Theorem 3) is developed from our new convex generalization analysis framework (Theorem 2) via replacing the co-coercivity property of convex, smooth function with the almost co-coercivity property of smooth function. Our analysis avoids the intermediate step measuring the distance between $v$ and $g(w)$ in [1], and gets better results $O((n^{-1}+m^{-1})\log T+n^{-1/2})$ under milder parameter selection.
We have supplemented the above explanations in Section 3.2 of our new manuscript to benefit readers’ understanding.
[7] J. Duchi, et al. Optimal rates for zero-order convex optimization: The power of two function evaluations. TIT, 2015.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the detailed reply. I do not have further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your recognition of our work and constructive comments which have improved our work. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their comments. Due to the character limit, we provide the table of the main differences among our main results in the "global response" and upload a PDF including this table. As mentioned in **A3** (the rebuttal to Reviewer Xsnq), we have added this table at the head of Appendix C in our new manuscript.
Table. The main differences among our main results. (Note that the line numbers in this table refer to our submission version.)
| **Results** | **Generalization** | **Optimization** |
| :--: | :--: | :--: |
| **Theorem 2** | **Co-coercivity (line 449)** | — |
| **Theorem 3** | **Almost co-coercivity (line 472)** | — |
| **Theorem 4** | **Unknown gradient estimation (lines 513-518)** | **Some decompositions of $\tilde{\nabla} f$ (line 546)** |
| **Corollary 1** | **Unknown gradient estimation (lines 563-568)** | **Some decompositions of $\tilde{\nabla} g$ (line 596)** |
| **Corollary 2** | **Unknown gradient estimation (lines 613-618)** | **Combination of Theorem 4 and Corollary 1** |
Pdf: /pdf/6fa4ef231bb038bd8c6037393a1504279b0d175f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FlexSBDD: Structure-Based Drug Design with Flexible Protein Modeling | Accept (poster) | Summary: This paper presents a new deep generative model, FlexSBDD, which advances the field of structure-based drug design (SBDD) by accounting for the flexibility of proteins when generating 3D ligand molecules. This approach addresses the shortcomings of traditional SBDD methods that assume proteins are rigid, leading to less effective drug interactions. To obtain apo-holo data pairs, the paper utilizes Apobind to generate apo structures from known holo structures. It adopts advanced flow matching to learn the apo-holo dynamics of proteins and generates 3D molecules in only 20 steps. Experiments on CrossDocked and Binding MOAD show that FlexSBDD can generate drug-like molecules with the highest affinity compared to auto-regressive and diffusion-based baselines.
Strengths: 1. For the task, this paper focuses on protein flexibility, which is a crucial shortcoming of current SBDD methods. From the data aspect, the paper utilizes Apobind to generate apo structures from known holo structures. From the evaluation aspect, the paper presents a case study analyzing the predicted structure in Section 5.4. For the standard SBDD evaluation, the paper includes two datasets and adopts Glide scores.
2. For methodology, this paper respects the characteristics of different modalities and utilizes continuous, Riemannian, torus, and discrete flow matching for the different modalities. Benefiting from the optimal-transport path of flow matching, the paper achieves quite impressive sampling efficiency.
3. For technical innovation, this paper proposes (1) data augmentation and (2) a good composition of geometric NN modules, both of which boost the performance.
4. The results look good. Congratulations, this paper makes it work.
Weaknesses: This paper focuses on protein flexibility for SBDD. However, the evaluation (except for the case study in Section 5.4) mainly follows standard SBDD, while readers may be more concerned about: 1. the quality of the Apobind-generated apo structures and model-generated holo structures, which is not fully discussed and evaluated; 2. why protein flexibility can help improve affinity for fixed holo structures. Thus, the reviewer is concerned about the thoroughness and presentation of the experiments, and encourages the authors to elaborate with further evaluation and more discussion specific to protein-flexible SBDD.
I will raise my score if more flexible SBDD evaluations and discussions are presented.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The reviewer believes FlexSBDD may generate different holo structures than ground-truth (GT). 1. The generated molecules should have higher affinities on generated holos than GT holos, right? 2. How different are the generated and GT holos? 3. If the GT ligand is provided, can the model generate accurate GT holos?
2. In Table 2, FlexSBDD shows much better bond length distribution than other baselines. How is bond generated? And why is it so good? And why is bond angles (Table 5) not much better?
3. In Table 3, why data augmentation boosts so much (and the most) on performance? It's hard to understand the connection between apo structures and affinity in fixed holos.
4. Affinity and other metrics have strong correlation with atom numbers. Please include atom numbers in Table 1 for fairer comparison.
I will raise my score if the questions are properly answered.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: see weakness above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation and valuable comments! We hope our following responses can properly address your questions.
**Comment 1**: Quality of Apobind-generated apo structures and model-generated holo structures, which is not fully discussed and evaluated
**Response 1**: Thanks for the question! We use self-consistency TM (scTM) scores to evaluate the quality of apo/holo structures. The average scTM of apo structures from Apobind is 0.957 and the model generated holo is 0.964. We have a brief discussion in lines 276-282. We will make the discussion clearer in the revised paper.
**Comment 2**: Why protein flexibility can help improve affinity for fixed holo structures? Thus, the reviewer concerns the thoroughness and presentation of the experiments. And the reviewer encourages the authors to elaborate on further evaluation and more discussion specific to protein-flexible SBDD.
**Response 2**: Thanks for the valuable comment! According to the induced-fit theory in biochemistry, proteins are flexible structures that undergo structural changes upon ligand binding, leading to enhanced interactions and binding affinity. Technically, modeling protein flexibility can help reduce steric clashes and adjust the structure to establish more protein-ligand interactions, such as hydrogen bonds, to improve affinity. During the rebuttal, we have performed a series of further evaluations **including DFG-in/out structure prediction (uploaded pdf), distinct conformational search with respect to time/space scale (Response 3), and comparing FlexSBDD with SBDD+flexible docking (Response 4 to Reviewer jwrm)**. These flexible-SBDD-specific evaluations show the effectiveness of FlexSBDD. We will include these results and discussions in our final version.
**Comment 3**: The reviewer believes FlexSBDD may generate different holo structures than ground-truth (GT). 1. The generated molecules should have higher affinities on generated holos than GT holos, right? 2. How different are the generated and GT holos? 3. If the GT ligand is provided, can the model generate accurate GT holos?
**Response 3**: Thanks for the insightful comments!
**Firstly**, it is true that the generated molecules have higher affinities on generated holos than GT holos. For example, the Avg. Vina Dock on generated holos is -9.12 while on GT holos is only -8.78. **Secondly**, the average RMSD between the generated and GT holos is 0.895, indicating the generated and GT holos are largely aligned.
**Finally**, if the GT ligand is provided, FlexSBDD can generate accurate holos close to GT holos.
We perform a comprehensive quantitative study on proteins with DFG-in/out conformations to evaluate whether FlexSBDD can perform conformational searches on proteins with substantial structural variability. As shown in the uploaded pdf, the majority of the predicted protein structures show a lower relative pocket RMSD (better) compared to the initial ones, verifying FlexSBDD's strong capability for ligand-specific conformational search.
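As a concrete reference for how such pocket RMSD numbers are typically computed, here is a minimal NumPy sketch of RMSD after optimal superposition (the Kabsch algorithm); this is an illustration, not the paper's evaluation code:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal
    superposition (Kabsch algorithm)."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    P = P - P.mean(axis=0)          # center both point clouds
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                     # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid improper rotation
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T              # optimal rotation
    diff = (P @ R.T) - Q
    return np.sqrt((diff ** 2).sum() / len(P))
```

Because of the superposition step, a structure that differs from the reference only by a rigid-body motion yields an RMSD of (numerically) zero.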
**Comment 4**: In Table 2, FlexSBDD shows much better bond length distribution than other baselines. How is bond generated? And why is it so good? And why is bond angles (Table 5) not much better?
**Response 4**: In FlexSBDD, the bonds are generated with post-processing similar to TargetDiff. The powerful flow matching framework and well-designed model architecture contribute to the better bond length distribution. As for bond angles, FlexSBDD still achieves competitive performance. The advantage over baseline methods may be smaller because bond angles are more complicated and FlexSBDD does not explicitly learn them (the bond representations are learned and updated in FlexSBDD). It will be our future work to design more powerful model architectures to learn representations of bond angles.
---
Rebuttal 2:
Title: Further Response to Reviewer 7REV
Comment: **Comment 5**: In Table 3, why does data augmentation boost performance so much (and the most)? It's hard to understand the connection between apo structures and affinity in fixed holos.
**Response 5**: Thanks for the insightful comment! Data augmentation plays an important role in FlexSBDD. As indicated in lines 186-195 of the submitted paper, we take apo-holo structure pairs for training. In each training iteration, we sample the apo structures $\mathcal{C}_0$ and holo structures $\mathcal{C}_1$ and interpolate to obtain $\mathcal{C}_t$, i.e., FlexSBDD is supervised to learn the protein structural changes, so the apo data play a critical role. However, the existing apo-holo data pairs from Apobind are quite limited (~10K data points), and directly training FlexSBDD on them leads to severe overfitting according to our experiments. Low-quality generated protein structures would directly lead to inferior Vina scores because of steric clashes and few protein-ligand interactions. **Therefore, we propose to use data augmentation to increase the training dataset size, cover more diverse apo-holo transition paths, and boost FlexSBDD's generalization capability.** The ablation studies show that data augmentation contributes substantially to FlexSBDD's performance.
We will include more detailed discussions and analysis of the role of data augmentation in our revised paper.
**Comment 6**: Affinity and other metrics have strong correlation with atom numbers. Please include atom numbers in Table 1 for fairer comparison.
**Response 6**: Thanks for the constructive suggestion! In FlexSBDD, the number of ligand atoms is sampled from the reference dataset distribution. We report the average number of ligand atoms below. Generally, the average number of atoms for FlexSBDD is comparable to the reference and the other baseline methods, and DecompDiff has the largest number of atoms. We will include these statistics in the final version.
| Methods | Reference | LiGAN | AR | Pocket2Mol | TargetDiff | DecompDiff | FlexSBDD|
|------------|--------------------|------|------------------------|------|---------------|------|------|
| Avg. Num of Atoms | 22.8 | 19.9 | 17.7 | - | 24.2 | 29.4 | 23.0 | | Summary: In this research paper, the authors introduce FlexSBDD, a novel model that employs flow matching for the generation of flexible protein-based molecules. Initially, the model samples a noisy ligand from an empirical distribution. Subsequently, it conducts flow matching on both geometric characteristics and atomic types. Through comprehensive experimentation, FlexSBDD has demonstrated superior performance compared to previous methodologies, including TargetDiff, Pocket2Mol, and DecompDiff.
Strengths: 1. FlexSBDD explores a new avenue, flexible structure-based conditional molecular generation, which is quite novel.
2. FlexSBDD demonstrates superior performance through well-recognized benchmarking and ablation studies, adhering to rigorous research practices.
3. FlexSBDD discussions have many chemical insights, which is commendable.
Weaknesses: 1. As discussed in DynamicBind, many flexible aspects of protein residues involve changes in backbone atoms, such as transitions from DFG-in to DFG-out conformations. The current implementation of FlexSBDD does not demonstrate its capability for conformational search in more rigorous settings, such as when the apo pocket exhibits substantial dissimilarity from the holo pocket.
2. Although FlexSBDD represents a commendable effort, it fails to address a fundamental issue in flexible-pocket generation: evolving the appropriate atomic number through the generation process. FlexSBDD initializes ligands from an empirical distribution without considering pocket structures, resulting in a somewhat stochastic generation process: sampling a large number of ligands may artificially expand the pocket, leading to an increase in biased docking scores. The illustration provided with the 1a2g example supports this observation.
3. The model architecture of FlexSBDD, which includes flow matching on various geometries, has been previously implemented in other works such as PPFlow, diminishing the architectural novelty of FlexSBDD.
4. The illustrations in Figure 4, including the structures from 4yhj and 1fmc, display unusual bond topologies in molecules generated by FlexSBDD, such as a ring containing two double bonds. This issue may stem from an oversight in bond modeling within the architecture.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In the evaluation using Vina dock metrics, it remains unclear which protein structures were utilized for benchmarking. Could you specify whether the benchmarks employed the initial holo structures or those updated by FlexSBDD?
2. Could you elaborate on the statement made in lines 216-217? “We note that it is fair to compare FlexSBDD with other baseline methods as the additional 217 apo structures contain no ligand molecules and cannot be used by baselines for training.” I do not fully understand the points here.
3. The code provided with the submission appears to lack both training and inference components, which undermines the credibility of the reported results. Could you address this omission?
4. Regarding the prediction of side-chain conformations, it appears that the analysis is limited to the mean squared error (MSE) of chi angles without considering the orientation within the residue frame prediction. Could you discuss the rationale behind this methodological choice?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations and broader impacts are well discussed in Appendix E of the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation and valuable comments!
**Comment 1**: As discussed in DynamicBind, many flexible aspects of protein residues involve changes in backbone atoms, such as transitions from DFG-in to DFG-out conformations. The current implementation of FlexSBDD does not demonstrate its capability for conformational search in more rigorous settings, such as when the apo pocket exhibits substantial dissimilarity from the holo pocket.
**Response 1**: Thanks for the insightful comment! FlexSBDD has the capability to model both backbone and sidechain structural changes. During rebuttal, we performed a comprehensive quantitative study on proteins with DFG-in/out conformations to evaluate whether FlexSBDD can perform conformational searches on proteins with substantial structural variability. **As shown in the uploaded pdf (https://openreview.net/forum?id=4AB54h21qG&noteId=Jo1MWV179v), the majority of the predicted protein structures show a lower pocket RMSD compared to the initial ones**, verifying FlexSBDD's strong capability for conformational search.
**Comment 2**: Although FlexSBDD represents a commendable effort, it fails to address a fundamental issue in flexible-pocket generation: evolving the appropriate atomic number through the generation process. FlexSBDD initializes ligands from an empirical distribution without considering pocket structures, resulting in a somewhat stochastic generation process: sampling a large number of ligands may artificially expand the pocket, leading to an increase in biased docking scores. The illustration provided with the 1a2g example supports this observation.
**Response 2**: Thanks for the insightful comment! In FlexSBDD, the number of ligand atoms is sampled from the reference dataset distribution.
According to related works [1-2], a flexible pocket can adaptively adjust its structure to accommodate ligand molecules of different sizes. Therefore, sampling ligand molecules with different sizes helps explore different binding modes, e.g., discover cryptic pockets. Pre-determining the ligand atom numbers may restrict the diversity and novelty of the generated molecules.
We also report the average number of ligand atoms below. Generally, the average number of atoms for FlexSBDD is comparable to the reference and the other baseline methods, while DecompDiff has a larger average number of atoms than FlexSBDD.
As for the case study examples, we select the best generated ligand molecules for each target protein, and FlexSBDD and DecompDiff have roughly the same number of atoms in most cases.
| Methods | Reference | LiGAN | AR | Pocket2Mol | TargetDiff | DecompDiff | FlexSBDD|
|------------|--------------------|------|------------------------|------|---------------|------|------|
| Avg. Num of Atoms | 22.8 | 19.9 | 17.7 | - | 24.2 | 29.4 | 23.0 |
There are also other promising techniques, such as training neural networks to predict the number of ligand atoms. We will include these discussions in our revised paper.
[1] Lu W, Zhang J, Huang W, et al. DynamicBind: Predicting ligand-specific protein-ligand complex structure with a deep equivariant generative model[J]. Nature Communications, 2024, 15(1): 1071.
[2] Qiao Z, Nie W, Vahdat A, et al. State-specific protein–ligand complex structure prediction with a multiscale deep generative model[J]. Nature Machine Intelligence, 2024, 6(2): 195-208.
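The size-sampling scheme above (drawing the ligand atom count from the reference set's empirical distribution) can be sketched as follows; the function and variable names are illustrative, not taken from the FlexSBDD codebase:

```python
import numpy as np

def sample_num_atoms(reference_sizes, n_samples, rng=None):
    """Draw ligand atom counts from the empirical distribution of
    the reference set by sampling with replacement."""
    if rng is None:
        rng = np.random.default_rng()
    sizes = np.asarray(reference_sizes)
    return rng.choice(sizes, size=n_samples, replace=True)
```

Sampling with replacement reproduces the reference size distribution in expectation while still allowing ligands of different sizes to probe different binding modes.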
**Comment 3**: The model architecture of FlexSBDD, which includes flow matching on various geometries, has been previously implemented in other works such as PPFlow, diminishing the architectural novelty of FlexSBDD.
**Response 3**: Thanks for the comment! Both FlexSBDD and PPFlow are based on Riemannian flow matching [3,4] proposed in previous works. Different from PPFlow, which focuses on peptide design, we propose a flow-matching-based generative model, FlexSBDD, capable of modeling protein flexibility while generating de novo 3D ligand molecules. To work well in the challenging flexible SBDD scenario, FlexSBDD has unique designs for sidechain flow matching, a scalar-vector dual representation architecture, and training with data augmentation.
We will cite and discuss more related works in our revised paper.
[3] Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747, 2022.
[4] Chen R T Q, Lipman Y. Riemannian flow matching on general geometries[J]. arXiv preprint arXiv:2302.03660, 2023.
**Comment 4**: The illustrations in Figure 4, including the structures from 4yhj and 1fmc, display unusual bond topologies in molecules generated by FlexSBDD, such as a ring containing two double bonds. This issue may stem from an oversight in bond modeling within the architecture.
**Response 4**: In FlexSBDD, the bonds are modeled with scalar/vector edge representations in the protein-ligand graph. The details of graph construction, feature initialization, message passing, and feature/structure updates are included in Appendix D; we will provide a more detailed and clearer description in our revised paper. Actually, rings containing two double bonds are common in drug molecules, such as Metronidazole [5], Omeprazole [6], Pyrantel Pamoate [7], and Celecoxib [8]. The sub-structure analysis (Sec. 5.3) further validates that FlexSBDD generates valid bond distance/angle distributions.
In the future, bond modeling can be further improved by, e.g., generating the bond types along with FlexSBDD sampling.
[5] FINEGOLD S M. Metronidazole[J]. Annals of Internal Medicine, 1980, 93(4): 585-587.
[6] Maton P N. Omeprazole[J]. New England Journal of Medicine, 1991, 324(14): 965-975.
[7] Rim H J, Won C Y, Lee S I. Anthelmintic effect of oxantel pamoate and pyrantel pamoate[J]. The Korean Journal of Parasitology, 1975, 13(2): 97-101.
[8] Puljak L, Marin A, Vrdoljak D, et al. Celecoxib for osteoarthritis[J]. Cochrane Database of Systematic Reviews, 2017 (5).
---
Rebuttal 2:
Title: Further Response to Reviewer Y45d
Comment: **Comment 5**: In the evaluation using Vina dock metrics, it remains unclear which protein structures were utilized for benchmarking. Could you specify whether the benchmarks employed the initial holo structures or those updated by FlexSBDD?
**Response 5**: For the Vina scores of FlexSBDD, we use the updated protein structure. As for the other baselines, we follow previous works to use the target structure from the test set for evaluation.
**Comment 6**: Could you elaborate on the statement made in lines 216-217? “We note that it is fair to compare FlexSBDD with other baseline methods as the additional 217 apo structures contain no ligand molecules and cannot be used by baselines for training.” I do not fully understand the points here.
**Response 6**: Thanks for the detailed question! In FlexSBDD, we associate holo structures from the training datasets (CrossDocked and Binding MOAD) with apo conformations from Apobind to create apo-holo pairs for training. We want to note that the additional Apobind dataset does not contain protein-ligand complex structures (i.e., only protein structures) and cannot be used by the baseline methods for training. The Apobind dataset employed by FlexSBDD therefore brings no data leakage or additional advantage, so it is fair to compare FlexSBDD with the baseline methods. We will make the statement clearer in our revised paper.
**Comment 7**: The code provided with the submission appears to lack both training and inference components, which undermines the credibility of the reported results. Could you address this omission?
**Response 7**: Thanks for the valuable comment! We have uploaded the training and inference codes. We will open-source all the codes upon paper acceptance.
**Comment 8**: Regarding the prediction of side-chain conformations, it appears that the analysis is limited to the mean squared error (MSE) of chi angles without considering the orientation within the residue frame prediction. Could you discuss the rationale behind this methodological choice?
**Response 8**: Thanks for the detailed comment! We use the mean squared error (MSE) of chi angles to evaluate the prediction of sidechain conformations, following previous works [9-11]. Generally, a lower MSE indicates more precise sidechain structure prediction. In Table 6 of the paper, we can observe that FlexSBDD achieves better performance in generating valid sidechain structures.
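For readers unfamiliar with the metric, here is a minimal sketch of a chi-angle MSE that respects the 360-degree periodicity of torsion angles; the exact implementation is an assumption, and the 180-degree symmetry of some residues' chi angles is deliberately ignored here:

```python
import numpy as np

def chi_angle_mse(pred_deg, true_deg):
    """MSE of side-chain chi angles in degrees, wrapping the
    difference into (-180, 180] so that e.g. 179 vs -179 counts
    as a 2-degree error, not 358 degrees.
    Note: symmetric chi angles (e.g. chi2 of Phe/Tyr) would need
    extra handling, which is omitted in this sketch."""
    diff = (np.asarray(pred_deg) - np.asarray(true_deg) + 180.0) % 360.0 - 180.0
    return float(np.mean(diff ** 2))
```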
During rebuttal, we performed additional analysis of the sidechain prediction. For example, we follow DynamicBind to conduct a comprehensive analysis of six distinct conformational changes spanning the picosecond to millisecond timescale (molecular dynamics), each exemplified by a case from PDBbind. In the following table, we report the Δpocket RMSD (including sidechain and backbone) of DynamicBind and FlexSBDD, which measures the relative decrease in pocket RMSD (crystal structure as reference) compared with the AlphaFold structures. **A negative Δpocket RMSD indicates that the predicted structure aligns more closely with the crystal structure than the AlphaFold prediction does.** We observe that FlexSBDD achieves competitive performance, improving the AlphaFold prediction to a lower pocket RMSD, even though it is not specifically designed for dynamic docking.
| Methods | 6QGF | 6PGO | 6N8X | 6UWV | 6ROT | 6S9X |
|------------|--------------------|------|------------------------|------|---------------|------|
| DynamicBind | -0.669 | -1.140| -2.297 | -0.465| -2.327 | -5.245 |
| FlexSBDD| -0.680 | -0.976| -1.159 | -0.504| -1.148 | -3.083 |
[9] Zhang Y, Zhang Z, Zhong B, et al. Diffpack: A torsional diffusion model for autoregressive protein side-chain packing. Advances in Neural Information Processing Systems, 2023.
[10] McPartlon M, Xu J. An end-to-end deep learning method for protein side-chain packing and inverse folding[J]. Proceedings of the National Academy of Sciences, 2023, 120(23): e2216438120.
[11] Dong T, Yang Z, Zhou J, et al. Equivariant flexible modeling of the protein–ligand binding pose with geometric deep learning[J]. Journal of Chemical Theory and Computation, 2023, 19(22): 8446-8459. | Summary: The authors identify an important missing factor in current SBDD modeling, i.e., protein structural change upon binding, and propose an E(3)-equivariant flow matching framework named FlexSBDD that jointly models protein flexibility and molecule generation. The paper augments apo structures for each holo protein in the training set via structure relaxation, Rosetta repacking, and random perturbations. FlexSBDD achieves SOTA binding affinities, QED, and diversity benchmarked on the CrossDocked2020 and Binding MOAD datasets, together with fewer clashes and more HB donors and acceptors.
Strengths: - This paper is well-motivated and generally easy to follow (except for model architecture and training).
- The authors raise an important question for SBDD, i.e. holo-structures are an induced fit for ligand molecules, which is a novel contribution.
- The results on two benchmarks are convincing and highlight the importance of modeling protein flexibility.
Weaknesses: - Typo: (Line 90-91) However, these methods can hardly [be] extended to the challenging de novo ligand generation, leaving it an unsolved problem.
- Typo: (Line 591) Protein Sturcture Analysis => Protein Structure Analysis
- Ablation studies suggest that the biggest performance gain comes from data augmentation. However, I feel that augmentation shouldn't matter that much, since the whole pipeline only implicitly utilizes the protein structural change and the explicit outcome is the generated ligand itself. Could the authors explain the role of data augmentation in your method?
- Since apo (unbound) and holo (bound) state proteins are of great focus in this paper, it seems to me that more reasonable metrics for protein analysis would be based on some molecular dynamics, instead of scTM or something that are not guaranteed to output holo proteins for given ligand molecules.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How would FlexSBDD behave if applied to an inpainting scenario (with ground truth holo-protein)?
- For a fair comparison, I would recommend the authors to try some flexible docking tools that also take the protein flexibility into account, and see how the performances of FlexSBDD and other baselines change.
- The authors could elaborate a bit more on the evaluation of protein structures. For example, why is FlexSBDD superior to SOTA protein-ligand complex structure prediction method? Under what setting are they being evaluated and compared?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for valuable comments and appreciation!
**Comment 1**: Ablation studies suggest that the biggest performance gain comes from data augmentation. However, I feel that augmentation shouldn't matter that much, since the whole pipeline only implicitly utilizes the protein structural change and the explicit outcome is the generated ligand itself. Could the authors explain the role of data augmentation in your method?
**Response 1**: Thanks for the insightful comment! Data augmentation plays an important role in FlexSBDD, and the protein structural changes are considered explicitly in the whole pipeline. As indicated in lines 186-195 of the submitted paper, we take apo-holo structure pairs for training. In each training iteration, we sample the apo structures $\mathcal{C}_0$ and holo structures $\mathcal{C}_1$ and interpolate to obtain $\mathcal{C}_t$, i.e., FlexSBDD is supervised to learn the protein structural changes, so these data play a critical role. However, the existing apo-holo data pairs from Apobind are quite limited (~10K data points), and directly training FlexSBDD on them leads to severe overfitting according to our experiments. Low-quality generated protein structures would directly lead to inferior Vina scores because of steric clashes and few protein-ligand interactions. **Therefore, we propose to use data augmentation to increase the training dataset size, cover more apo-holo transition paths, and boost FlexSBDD's generalization capability.** The ablation studies show that data augmentation contributes substantially to FlexSBDD's performance.
We will include more detailed discussions and analysis of the role of data augmentation in our revised paper.
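The interpolation step described above can be sketched for the coordinate component as a linear conditional flow; this is an illustrative simplification that omits FlexSBDD's rotation/torsion components and network parameterization:

```python
import numpy as np

def interpolate_structures(C0, C1, t):
    """Linear flow-matching path between apo (C0) and holo (C1)
    coordinates: C_t = (1 - t) * C0 + t * C1, for t in [0, 1]."""
    return (1.0 - t) * C0 + t * C1

def velocity_target(C0, C1):
    """Regression target for the vector field along the linear
    path: the constant velocity C1 - C0."""
    return C1 - C0
```

During training, a model would be asked to predict `velocity_target(C0, C1)` given `interpolate_structures(C0, C1, t)` for random `t`, which is what supervises the apo-to-holo structural transition.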
**Comment 2**: Since apo (unbound) and holo (bound) state proteins are of great focus in this paper, it seems to me that more reasonable metrics for protein analysis would be based on some molecular dynamics, instead of scTM or something that are not guaranteed to output holo proteins for given ligand molecules.
**Response 2**: Thanks for the constructive suggestion! We follow DynamicBind to conduct a comprehensive analysis of six distinct conformational changes **spanning the picosecond to millisecond timescale (molecular dynamics)**, each exemplified by a case from PDBbind. In the following table, we report the Δpocket RMSD of DynamicBind and FlexSBDD, which measures the relative decrease in pocket RMSD (crystal structure as reference) compared with the AlphaFold structures. **A negative Δpocket RMSD indicates that the predicted structure aligns more closely with the crystal structure than the AlphaFold prediction does.** We observe that FlexSBDD achieves competitive performance although it is not specifically designed for dynamic docking.
| Methods | 6QGF | 6PGO | 6N8X | 6UWV | 6ROT | 6S9X |
|------------|--------------------|------|------------------------|------|---------------|------|
| DynamicBind | -0.669 | -1.140| -2.297 | -0.465| -2.327 | -5.245 |
| FlexSBDD| -0.680 | -0.976| -1.159 | -0.504| -1.148 | -3.083 |
**Comment 3**: How would FlexSBDD behave if applied to an inpainting scenario (with ground truth holo-protein)?
**Response 3**: Thanks for the question! We agree it is a good comparison to apply FlexSBDD to the inpainting scenario (with the ground-truth holo protein), which is the same setting as the baseline methods (LiGAN, AR, Pocket2Mol, TargetDiff, and DecompDiff). We show the results below and observe that FlexSBDD still performs well in the inpainting setting, showing the generalizability and flexibility of FlexSBDD's architecture.
| Methods | Vina Score (↓) Avg. | Med. | Vina Min (↓) Avg. | Med. | Vina Dock (↓) Avg. | Med. | High Affinity (↑) Avg. | Med. | QED (↑) Avg. | Med. | SA (↑) Avg. | Med. | Diversity (↑) Avg. | Med. |
|------------|---------------------|------|--------------------|------|--------------------|------|------------------------|------|---------------|------|-------------|------|--------------------|------|
| **Reference** | -6.36 | -6.46| -6.71 | -6.49| -7.45 | -7.26| - | - | 0.48 | 0.47 | 0.73 | 0.74 | - | - |
| LiGAN | - | - | - | - | -6.33 | -6.20| 21.1% | 11.1%| 0.39 | 0.39 | 0.59 | 0.57 | 0.66 | 0.67 |
| AR | *-5.75* | -5.64 | -6.18 | -5.88 | -6.75 | -6.62 | 37.9% | 31.0% | 0.51 | 0.50 | 0.63 | 0.63 | 0.70 | 0.70 |
| Pocket2Mol | -5.14 | -4.70| -6.42 | -5.82| -7.15 | -6.79| 48.4% | 51.0%| **0.56** | **0.57** | **0.74** | **0.75** | 0.69 | *0.71* |
| TargetDiff | -5.47 | *-6.30*| -6.64 | -6.86| -7.80 | -7.91| 58.1% | 59.1%| 0.48 | 0.48 | 0.58 | 0.58 | 0.72 | *0.71* |
| DecompDiff | -5.67 | -6.04 | *-7.04* | *-6.91*| *-8.39* | *-8.43*| *64.4%* | *71.0%*| 0.45 | 0.43 | 0.61 | 0.60 | 0.68 | 0.68 |
| FlexSBDD (inpaint)| **-6.69** | **-7.16**| **-8.24** | **-8.50**| **-9.06** | **-9.12**| **75.9%** | **82.1%**| **0.58** | **0.58** | *0.70* | *0.71* | **0.74** | **0.72** |
- **Bold**: Best results
- *Italic*: Second best
---
Rebuttal 2:
Title: Further Response to Reviewer jwrm
Comment: **Comment 4**: For a fair comparison, I would recommend the authors to try some flexible docking tools that also take the protein flexibility into account, and see how the performances of FlexSBDD and other baselines change.
**Response 4**: Thanks for the valuable suggestion! In the following table, **we combine the top-2 SBDD baselines with DynamicBind [51], a flexible docking method, to compare with FlexSBDD.** Specifically, after generating the ligand molecules with TargetDiff/DecompDiff, we further dock the ligands to the target protein with DynamicBind. For a fair comparison, we show the results of Vina Dock, QED, SA, and Diversity below. We observe that applying flexible docking as post-processing indeed improves the Vina Dock of the baselines. FlexSBDD still achieves the best results as an end-to-end generative model for de novo ligand generation while adjusting protein structures. We will include these new results and discussions in our revised paper.
| Methods | Vina Dock (↓) Avg. | Med. | High Affinity (↑) Avg. | Med. | QED (↑) Avg. | Med. | SA (↑) Avg. | Med. | Diversity (↑) Avg. | Med. |
|------------|--------------------|------|------------------------|------|---------------|------|-------------|------|--------------------|------|
| TargetDiff | -8.17 | -8.25| 62.3% | 63.0%| 0.48 | 0.48 | 0.58 | 0.58 | 0.72 | *0.71* |
| DecompDiff | *-8.89* | *-8.97*| *69.1%* | *74.5%*| 0.45 | 0.43 | 0.61 | 0.60 | 0.68 | 0.68 |
| **FlexSBDD**| **-9.12** | **-9.25**| **78.5%** | **84.2%**| **0.58** | **0.59** | *0.69* | *0.73* | **0.76** | **0.75** |
- **Bold**: Best results
- *Italic*: Second best
[51] Wei Lu, Ji-Xian Zhang, Weifeng Huang, Ziqiao Zhang, Xiangyu Jia, Zhenyu Wang, Leilei Shi, Chengtao Li, Peter Wolynes, and Shuangjia Zheng. Dynamicbind: Predicting ligand-specific protein-ligand complex structure with a deep equivariant generative model. 2023.
**Comment 5**: The authors could elaborate a bit more on the evaluation of protein structures. For example, why is FlexSBDD superior to SOTA protein-ligand complex structure prediction method? Under what setting are they being evaluated and compared?
**Response 5**: Thanks for the valuable suggestion! In Appendix B.3, we evaluate the sidechain prediction of FlexSBDD and compare it with the SOTA protein-ligand complex structure prediction method NeuralPLexer. **We use the inpainting setting for experiments: the NeuralPLexer model is asked to jointly predict the structure for a cropped spherical region within 6.0 Å of any ligand atom by inpainting all the amino acid and ligand atomic coordinates from scratch; FlexSBDD generates the ligand structure and updates the apo structure to holo.** To evaluate the validity of sidechain structures, we compute the Mean Absolute Error (MAE) of sidechain angles (degrees) over the inpainting region. In Table 6, we observe that FlexSBDD achieves better results on MAE. This can be attributed to the advanced flow matching framework, the scalar-vector dual representation architecture, and the data augmentation strategy. We will include more details of the experimental settings and discussions in the revised version.
**Comment 6**: Typos: (Line 90-91) However, these methods can hardly [be] extended to the challenging de novo ligand generation, leaving it an unsolved problem. (Line 591) Protein Sturcture Analysis => Protein Structure Analysis
**Response 6**: Thanks for the detailed comments! We have corrected our typos and will submit the updated paper in the final version.
---
Rebuttal Comment 2.1:
Comment: Thanks for the authors' detailed response. It addressed all my concerns. I have raised my score to 7 in hopes that this paper gets accepted.
---
Reply to Comment 2.1.1:
Title: Thanks for your support!
Comment: Dear Reviewer,
Thanks for your support! We are glad that our response addressed all your concerns.
Bests,
Authors | Summary: In this paper, the authors focus on the flexible protein setting in the structure-based drug design task. They propose a method named FlexSBDD, which is based on flow matching and utilizes E(3)-equivariant neural networks. The experiments show the advantages of the proposed FlexSBDD.
Strengths: 1. This paper focuses on an interesting setting where the proteins are flexible.
2. The presentation is good, and the paper is well-organized.
Weaknesses: 1. The method does not explicitly model the interaction between the ligand and the protein, especially the pocket. The authors might consider building an external interaction graph between the residues in the pocket and the atoms of the ligand.
2. I would like to see more focused results on the binding interface.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses. Additionally, I noticed a work [1] that is closely related to this paper, also employing flow matching. However, the authors have not cited or discussed the differences between their work and this one.
**Minor Concern:**
Line 240 "Table.": Since "Table" is written in full without abbreviation, there is no need to add a period after "Table". This issue occurs in multiple places.
**Reference:**
[1] Schneuing, Arne, et al. "Towards Structure-based Drug Design with Protein Flexibility." ICLR 2024 Workshop on Generative and Experimental Perspectives for Biomolecular Design.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of FlexSBDD.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments!
**Comment 1**: The method does not explicitly model the interaction between the ligand and the protein, especially the pocket. The authors might consider building an external interaction graph between the residues in the pocket and the atoms of the ligand.
**Response 1**: As indicated in lines 666-668 of the submitted paper, in FlexSBDD we represent the protein pocket-ligand complex as a k-nearest-neighbor (KNN) graph, in which nodes represent protein residues or ligand atoms and each node is connected to its k nearest neighbors. Therefore, the interactions between the ligand and the protein are captured as edge representations in the constructed protein-ligand graph.
We can also incorporate an additional interaction graph to emphasize protein-ligand interactions. For instance, we may train interaction/binding affinity predictors based on protein-ligand graphs and use these trained predictors to guide the sampling process of FlexSBDD, similar to the approach described in [1]. These only require minor modifications to the FlexSBDD framework. We will include these additional discussions in our revised paper.
[1] Qian H, Huang W, Tu S, et al. KGDiff: towards explainable target-aware molecule generation with knowledge guidance[J]. Briefings in Bioinformatics, 2024, 25(1): bbad435.
**Comment 2**: I would like to see more focused results on the binding interface.
**Response 2**: Thanks for the valuable comments! Besides the common benchmark metrics adopted by previous SBDD papers such as Vina scores, QED, and SA, we also focus on investigating the interactions at the binding interface. **In Figure 3, we consider steric clashes, hydrogen bonds, and hydrophobic interactions at the protein-ligand binding interface.** Steric clashes happen when two neutral atoms come into closer proximity than the sum of their van der Waals radii, indicating energetically unfavorable and physically unrealistic structures. Hydrogen bonds (HBs) and hydrophobic interactions are noncovalent interactions that significantly contribute to the binding affinity between proteins and ligands.
We observe that FlexSBDD can generate ligands introducing fewer clashes and more favorable interactions. For example, the average steric clashes for DecompDiff (baseline) and FlexSBDD are 6.43 and 1.39 respectively. The average number of HB Acceptors for DecompDiff (baseline) and FlexSBDD are 1.18 and 1.96 respectively. These results indicate that FlexSBDD can adaptively adjust protein and ligand conformations to reduce clashes and increase favorable protein-ligand interactions.
This analysis is included in lines 257-268 of the paper. We will include a more comprehensive analysis in the revised paper.
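The steric-clash criterion described above (two atoms closer than the sum of their van der Waals radii) can be sketched as a simple counter; the radii values and function names below are illustrative, not taken from the paper:

```python
import math

# Approximate textbook van der Waals radii in Angstroms (illustrative).
VDW = {"C": 1.70, "N": 1.55, "O": 1.52, "H": 1.20}

def count_clashes(protein_atoms, ligand_atoms, tolerance=0.0):
    """Count protein-ligand atom pairs closer than the sum of their
    van der Waals radii (minus an optional tolerance). Atoms are
    (element, position) pairs."""
    clashes = 0
    for elem_p, pos_p in protein_atoms:
        for elem_l, pos_l in ligand_atoms:
            if math.dist(pos_p, pos_l) < VDW[elem_p] + VDW[elem_l] - tolerance:
                clashes += 1
    return clashes
```

For instance, two carbon atoms 1.0 Å apart clash (1.0 < 1.70 + 1.70), while a pair 5.0 Å apart does not.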
**Comment 3**: I noticed a work [2] that is closely related to this paper, also employing flow matching. However, the authors have not cited or discussed the differences between their work and this one.
[2] Schneuing, Arne, et al. "Towards Structure-based Drug Design with Protein Flexibility." ICLR 2024 Workshop on Generative and Experimental Perspectives for Biomolecular Design.
**Response 3**: Thanks for mentioning the related paper. The ICLR24 workshop paper represents pioneering work on structure-based drug design with protein flexibility. However, it only considers side-chain flexibility while keeping backbone atoms fixed; the authors themselves note that full-fledged induced-fit modeling requires backbone movement. Moreover, it trains only on protein-ligand binding complex structures and does not consider apo-holo structure pairs to learn structural transitions. As a result, the performance of FlexFlow from the ICLR24 workshop is quite limited, even worse than its counterpart without flexible side-chain modeling.
In comparison, FlexSBDD is able to model both sidechain and backbone flexibility of the protein while generating de novo ligand molecules. To better learn the transition between apo and holo structures, we associate data from the training set with apo structures from Apobind. We also create synthetic apo conformations with OpenMM relaxation and Rosetta repacking. As for the performance, FlexSBDD not only achieves state-of-the-art performance on benchmark datasets (e.g., -9.12 Avg. Vina Dock score), but also learns to adjust the protein structure to increase favorable interactions (e.g., 1.96 Avg. Hydrogen bond acceptors) and decrease steric clashes.
We will cite and include the discussions in our revised paper.
**Comment 4**: Line 240 "Table.": Since "Table" is written in full without abbreviation, there is no need to add a period after "Table". This issue occurs in multiple places.
**Response 4**: Thanks for the valuable comments! We have corrected the typos in our paper and would like to update the submitted paper in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I will keep my original score.
---
Reply to Comment 1.1.1:
Title: Thanks for your response!
Comment: Dear Reviewer,
Thanks for your valuable suggestions and support! We will include the above discussions in our revised paper. Thanks!
Bests,
Authors | Rebuttal 1:
Rebuttal: Thanks for the insightful comments and appreciation from all the reviewers!
FlexSBDD has the capability to model both the backbone and the sidechain structural changes. During rebuttal, we perform a comprehensive quantitative study on proteins with DFG-in/out conformations to evaluate whether FlexSBDD can perform conformational searches on proteins with substantial structural variability. As shown in the uploaded pdf, the majority of the predicted protein structures show a lower relative pocket RMSD (better) compared to the initial ones, verifying FlexSBDD’s strong capability for ligand-specific conformational search.
As for the specific questions, our responses are included in the following paragraphs. Thanks!
Pdf: /pdf/0510401fc5cc0583ebec21417f5aed6960eff04a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Slack-Free Spiking Neural Network Formulation for Hypergraph Minimum Vertex Cover | Accept (poster) | Summary: Traditional SNN methods for combinatorial optimization necessitate the use of penalty terms with slack variables to maintain feasibility constraints. The paper introduces a novel Spiking Neural Network (SNN) formulation designed to solve the Hypergraph Minimum Vertex Cover (HMVC) problem without requiring slack variables. The proposed SF-HMVC replaces slack variables with additional spiking neurons that check and correct constraints, facilitating convergence to feasible solutions.
Strengths: 1. Innovative approach to solving HMVC without slack variables, reducing the search space and improving solver effectiveness.
2. Consistently high-quality solutions across multiple problem instances.
3. The paper structure is well-organized.
Weaknesses: 1. The experiments are relatively small in scale and simple.
2. Lack of comparison with other solving methods, e.g. Gurobi and D-Wave.
3. There are some typos. E.g. in line 102, *whot* -> *who*.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How scalable is the proposed SNN method with future advancements in neuromorphic hardware?
2. Can the approach be generalized to other combinatorial optimization problems beyond HMVC?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: 1. The neuromorphic hardware capacity is currently limited, restricting the scale of problem instances that can be tested.
2. There is a lack of public HMVC benchmarks, complicating the evaluation of the method's generalizability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the feedback.
1. Despite the relatively small problem instances that can be solved by the neuromorphic hardware available (Loihi 2), we would like to point out that the selected problem instances were already sufficient to convincingly illustrate the benefit for the proposed approach. Particularly for HMVC, the proposed SNN (SF-HMVC) could provide good results on Loihi 2, while the baseline QUBO-based SNN either returned infeasible results or could not be executed on the hardware due to exceeding resource limits. See Tables 3 and 4.
2. Note that we have compared against the Gurobi solver; see **Competitors** in Sec. 5.1, and columns `ILP-CPU` and `QUBO-CPU` in Tables 1, 2, 3, and 4. Since Gurobi is established optimization software that guarantees global optimality, the Gurobi-based solutions provide a quality reference. On the other hand, the energy consumption figures in Tables 2 and 4 show that the SNN algorithms, particularly `SF-HMVC`, consume at least an order of magnitude less energy than Gurobi.
Thanks for the suggestion to compare against D-Wave. While both neuromorphic computers and quantum annealers are Ising solvers and thus can solve QUBO, solving HMVC on D-Wave will still require adding slack variables to convert HMVC to QUBO. The additional variables will consume the qubit budget, thus reducing the size of the HMVC instances that can be solved using D-Wave. In contrast, the neuromorphic paradigm provides more flexibility to handcraft SNNs, which we exploited to develop SF-HMVC, a slack-free formulation. Nonetheless, we agree that comparing against D-Wave is interesting, which we leave as future work.
3. Thank you for pointing out the typos! If accepted, we will carefully check the paper and remove typos.
**How scalable is the proposed SNN method with future advancements in neuromorphic hardware?**
We have added an analysis of the scalability of SF-HMVC versus the QUBO-based SNN:
Let $n_{neurons}$ be the number of spiking neurons needed by the SNN, and $N$, $K$ and $r$ respectively be the number of vertices, number of hyperedges, and hyperedge degree/size of the input hypergraph ($r > 2$ for HMVC; see Problem 2). We have:
$n_{neurons} \text{(QUBO)} = N + (r-1)K$
$n_{neurons} \text{(SF-HMVC)} = N + K$
Fig. 1 illustrates the SNN construction. Fundamentally, treating $N$, $K$ and $r$ as input (size) parameters, *SF-HMVC scales linearly* whereas the *QUBO-based SNN scales quadratically* with the HMVC problem size.
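The two neuron-count formulas above can be written as simple counting functions (an illustrative sketch; the function names are ours, not from the paper):

```python
def n_neurons_qubo(N, K, r):
    """QUBO formulation: one spiking neuron per vertex plus (r-1)
    slack neurons per hyperedge."""
    return N + (r - 1) * K

def n_neurons_sf_hmvc(N, K, r):
    """Slack-free formulation: one NEBM neuron per vertex plus one
    FB (feedback) neuron per hyperedge; r does not enter."""
    return N + K
```

For example, a hypergraph with N=100 vertices, K=200 hyperedges and hyperedge size r=5 needs 900 neurons under the QUBO formulation but only 300 under SF-HMVC.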
**Can the approach be generalized to other combinatorial optimization problems beyond HMVC?**
As surveyed in Sec. 2, the flexibility of the neuromorphic approach allows SNNs to be handcrafted for combinatorial problems. So far, this has included constraint satisfaction problems (in the context of Sudoku [8,29], graph coloring [17], graph hamiltonian [21]) and Boolean satisfiability [21,32].
Our work is the first to conduct combinatorial *optimization* using a handcrafted SNN (previous such works were QUBO-based that employed the generic SNN QUBO solver).
Based on the broader research and our work, there is great potential to generalize the handcrafted approach to other combinatorial optimization problems. Moreover, HMVC is a general problem with many related formulations (set cover, hitting set, transversal) and special cases (MVC, max independent set), hence, our method is applicable to a wide variety of problems.
**The neuromorphic hardware capacity is currently limited**
Note that while our experiments were conducted on a single-chip system, stackable multiple-chip systems have been developed [A]. We also highlight the recent unveiling of the world's largest neuromorphic system by Intel, which contains "1.15 billion neurons and 128 billion synapses" [1]. With significant commitment by the chipmaking industry on neuromorphic computing, there is great potential for the hardware limitation to be resolved in the near term, thus, it is important for the AI/ML community to focus on SNN algorithm development now.
**Lack of public HMVC benchmarks**
We would like to reiterate point 1 above, where the current datasets/results are already sufficient to convincingly illustrate the much better scalability and performance of the proposed SF-HMVC over the QUBO-based SNN.
[A] Mehonic et al. Roadmap to Neuromorphic Computing with Emerging Technologies. arXiv:2407.02353
---
Rebuttal Comment 1.1:
Comment: Thanks for authors' rebuttal!
Overall I believe this is a technically solid work and I'm keeping my original rating with increased confidence. | Summary: This paper presents a novel approach to solving the Hypergraph Minimum Vertex Cover (HMVC) problem using spiking neural networks (SNNs). The authors introduce a slack-free formulation (SF-HMVC) that directly translates the constraints of the HMVC problem into the dynamics of SNN neurons, specifically targeting implementation on neuromorphic hardware such as Intel's Loihi 2. The paper demonstrates that the proposed method can effectively solve HMVC problems and provides a comparative analysis with other optimization techniques, such as Integer Linear Programming (ILP) and Quadratic Unconstrained Binary Optimization (QUBO).
Strengths: 1. This paper is well-written.
2. The proposed SF-HMVC approach is designed to be scalable, handling larger problem instances effectively. The parallel nature of neuromorphic computing allows for efficient processing of complex optimization problems.
3. The results show that SF-HMVC can occasionally outperform QUBO-CPU methods in solution quality for larger problem instances, indicating that the SNN-based approach can be competitive with traditional optimization algorithms.
Weaknesses: 1. Algorithm 1 is not the algorithm proposed in the article, yet it occupies a significant portion of the paper. It is recommended to move it to the appendix.
2. Could the authors provide a more detailed explanation of the setup and motivation for the W matrix, such as how the A matrix and F matrix are configured?
3. The experiments are influenced by many hyperparameters, such as $\lambda$ and timestep $T$. It is recommended to conduct appropriate ablation experiments.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see weakness.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors raise some limitations; for example, the capacity of Loihi 2 is low, limited by the current state of hardware development.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the feedback.
1. We included Algorithm 1 in the paper to make it self-contained, however, if accepted, we will move it to the appendix in the camera-ready version.
2. The motivation of the $\mathbf{W}$ matrix is to capture the connection strengths between neurons within the SF-HMVC SNN, which can be interpreted as interactions between NEBM-NEBM and NEBM-FB neurons.
- The binary variables in Eq. 6 are encoded as NEBM neurons, which are designed to inhibit each other to facilitate the minimization of the objective function. The connection strengths between NEBM-NEBM neurons are associated with $\mathbf{F}$ matrix, where the entries $f_{ij}$ are calculated based on the occurrence and co-occurrence of variables $z_i$, $z_j$ within the problem constraints.
- The $\mathbf{A}$ matrix defines the connection between NEBM versus FB neurons. The motivation of the $\mathbf{A}$ matrix is to introduce an FB neuron to each set of NEBM neurons that belong to the same constraints. The FB neuron is active only when all NEBM neurons under its "observation" are turned off (corresponding to the constraint being violated) and remains silent otherwise. Once activated, the FB neuron sends excitatory signals until the constraint is satisfied.
We hope the above clarifies the setup and motivation for $\mathbf{W}$ matrix.
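As an illustrative sketch of the sparsity patterns described above (the exact entry values in the paper may differ), the following builds $\mathbf{F}$ from variable co-occurrence within constraints and $\mathbf{A}$ from constraint membership, where `constraints` lists the variable indices appearing in each constraint:

```python
def build_F_A(n_vars, constraints):
    """Toy construction of F (NEBM-NEBM co-occurrence) and A (NEBM-FB
    membership). Shows only the sparsity pattern, not the paper's
    exact connection weights."""
    F = [[0] * n_vars for _ in range(n_vars)]
    A = [[0] * n_vars for _ in range(len(constraints))]
    for e, members in enumerate(constraints):
        for i in members:
            A[e][i] = 1            # FB neuron e observes NEBM neuron i
            for j in members:
                if i != j:
                    F[i][j] += 1   # z_i and z_j co-occur in constraint e
    return F, A
```

For three variables and constraints {z0, z1} and {z1, z2}, F links the co-occurring pairs and each row of A marks one constraint's members.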
3. Note that the baseline QUBO-based SNN (Algorithm 1) requires 3 hyperparameters:
- $\lambda$ for the weight of the penalty term.
- $T$ for the temperature.
- $r_i$ for the refractory period.
The proposed SNN SF-HMVC (Algorithm 2) requires only 2 hyperparameters ($T$ and $r_i$) due to our slack-free formulation. Moreover, since the total number of steps $M$ given to both algorithms is fixed to $1000$, for brevity we do not perform ablation on $M$.
Figure R3 in the PDF under **Author Rebuttal** shows the ablation studies of the influence of $T$ and $r_i$ on the solution quality of SF-HMVC on Loihi 2. The results show that there are hyperparameter configurations where our method consistently yielded high-quality solutions.
Also, as mentioned in L194, we have conducted grid search to find common settings of hyperparameters ($\lambda$ for the QUBO formulations, temperature and refractory period for the SNN models) that work well for all problem instances. If accepted, we will define the chosen hyperparameter values in the paper.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Dear authors,
Thanks for your rebuttal. I think the authors addressed my questions and I appreciate the supplemented experiments. So I increase my confidence score. | Summary: The paper presents a novel approach to solving the Hypergraph Minimum Vertex Cover (HMVC) problem using Spiking Neural Networks (SNNs) on neuromorphic hardware, which is a significant contribution to the field of combinatorial optimization in neuromorphic computing. Here's a detailed review based on various aspects of the paper:
Strengths: * The integration of spiking neural networks (SNNs) with quantum-inspired optimization techniques represents a novel approach to solving hard minimum vertex cover (HMVC) problems. This hybrid method leverages the strengths of both neuromorphic computing and quantum mechanics principles, potentially opening new avenues for complex problem-solving.
* The authors provide a clear explanation of the neuromorphic computing background, the limitations of existing SNN approaches, and the rationale behind their novel method. The use of NEBM spiking neurons and the detailed description of the network architecture and dynamics add depth to the technical discussion.
Weaknesses: * The comparison is mainly limited to traditional SNN-based QUBO solvers. Including comparisons with other contemporary optimization techniques, especially those that are non-neuromorphic, could provide a clearer benchmarking against the state-of-the-art in broader combinatorial optimization research.
* While the paper claims improved energy efficiency, detailed metrics or comparative energy consumption data are lacking. Providing explicit energy consumption figures or a more detailed analysis could help substantiate these claims and compare them with other methods' energy profiles.
* Although the paper discusses the potential scalability of the approach, there is limited empirical evidence supporting this claim, especially in terms of larger and more complex problem instances. Detailed scalability analysis, possibly through theoretical modeling or additional experiments, would be beneficial.
* The effectiveness of the proposed method is closely tied to the availability and performance of specific neuromorphic hardware. This dependency could limit its accessibility and practicality, especially in environments where such hardware is not readily available or is cost-prohibitive.
Technical Quality: 2
Clarity: 3
Questions for Authors: * Are there any more dataset available for benchmarking?
* What measures have been taken to prevent overfitting in the model during the experimental phase?
* Are there any scalability concerns when applying this method to larger or more complex problem sets?
Confidence: 1
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: limitations are discussed in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the feedback.
- Note that we have compared against two contemporary (non-neuromorphic) optimization techniques: integer linear programming (ILP) and quadratic unconstrained binary optimization (QUBO), both implemented using leading optimization software (Gurobi) and executed on an Intel Core i7 CPU. See columns `ILP-CPU` and `QUBO-CPU` in Tables 1 to 4.
- Note that we have included explicit energy consumption figures in Tables 2 and 4 for all methods. The energy measurements were obtained using the built-in profiler in the Loihi framework for the SNN solutions and pyJoules [5] for the CPU solutions.
To summarize, the SNNs on neuromorphic hardware (`SF-HMVC-Loihi` and `QUBO-Loihi`) consumed significantly less energy (at least 1 order of magnitude less, and often several orders of magnitudes less) than the CPU-based methods (`ILP-CPU` and `QUBO-CPU`).
Also, from Table 3, our method `SF-HMVC-Loihi` could return good solutions for all HMVC instances, whereas `QUBO-Loihi` was infeasible in a majority of the instances.
- The reviewer makes a good point, thanks! Since the capacity of the current neuromorphic hardware precludes testing on larger and more complex HMVC instances, a detailed scalability analysis is useful.
We conduct this analysis by modeling the number of spiking neurons $n_{neurons}$ needed by either the QUBO-based SNN or the proposed SF-HMVC. Let $N$, $K$ and $r$ respectively be the number of vertices, number of hyperedges, and hyperedge degree/size of the input hypergraph ($r > 2$ for HMVC; see Problem 2 in the paper). We have:
$n_{neurons} \text{(QUBO)} = N + (r-1)K$
$n_{neurons} \text{(SF-HMVC)} = N + K$
Fig. 1 illustrates the SNN construction. Since $N$, $K$ and $r$ collectively define the size of the input, we can conclude that *SF-HMVC scales linearly* whereas the *QUBO-based SNN scales quadratically* with the problem size. Observe the HMVC results in Tables 3 and 4 where QUBO-SNN failed to be embedded in Loihi 2 due to exceeding the resource limits, whereas SF-HMVC could still find high-quality solutions. If accepted, we will add the above analysis.
- Note that the SNN formalism [31] is independent of neuromorphic hardware implementation, hence, SNN algorithm design and comparative analysis can be abstracted from the hardware, e.g., see above on $n_{neurons}$.
While empirical validation is tied to the hardware, we note that practical neuromorphic hardware has become very accessible in recent years. For example, academic researchers can sign up at no cost to the Intel Neuromorphic Research Community to obtain free cloud-based access to Loihi 2. Smaller firms are also offering mature neuromorphic hardware at affordable prices, e.g., see BrainChip's Akida™ PCIe Board.
**Are there any more dataset available for benchmarking?**
There are other datasets for benchmarking (e.g., BHOSLIB for MVC). However, the selected problem instances were already sufficient to convincingly illustrate the benefit of the proposed approach. Particularly for HMVC, the proposed SNN (SF-HMVC) could provide good results on Loihi 2, while the baseline QUBO-based SNN either returned infeasible results or could not be executed on the hardware due to exceeding resource limits. See Tables 3 and 4.
While testing on larger instances will require larger-scale hardware, note that Intel recently unveiled the world's largest neuromorphic system, containing "1.15 billion neurons and 128 billion synapses" [1]. Thus, it is important for the AI/ML community to focus on SNN algorithm development now.
**What measures have been taken to prevent overfitting in the model during the experimental phase?**
Note that our SNN directly optimizes solutions for the combinatorial problem without involving separate training and inference phases. An interpretation of preventing "overfitting" in our case can be finding hyperparameter settings that generally work for all instances.
As mentioned in L213, for each method, we conducted grid search for hyperparameters and selected a single configuration that demonstrated consistent performance across all problem instances. If accepted, we will further elaborate this point.
**Are there any scalability concerns when applying this method to larger or more complex problem sets?**
In the analysis above, we have shown that our SNN (SF-HMVC) scales linearly while the baseline QUBO-based SNN scales quadratically, hence, our method is provably more scalable. The fact that SF-HMVC does not need to optimize slack variables will also allow it to return higher quality results due to simpler loss landscapes.
The current main limitation is due to the hardware, which, as highlighted above, is on the way to being resolved judging by recent major developments.
Strengths: The work advances the field of neural networks beyond machine learning, i.e. in combinatorial optimization.
Moreover, it concerns spiking neural networks, a field that has been attracting growing interest.
Furthermore, the method is actually tested in neuromorphic hardware, contrary to many papers in the field that only include theoretical implications for neuromorphic hardware.
Moreover, the work presents new partial evidence that neuromorphic algorithms and neuromorphic hardware may have advantages over more conventional approaches, a promise of this research field that has been long looking for fulfillment.
Weaknesses: On the other hand, the paper has several weaknesses.
1. The paper does not make clear how significant the type of problem addressed here (HMVC) is, and why it is significant.
2. A figure illustrating an example toy problem of HMVC as well as its solution in the QUBO-based SNN and in the newly proposed method would be very helpful in clarifying the paper's contribution.
3. The literature review around neuromorphic hardware is rather narrow, and largely focuses on Loihi alone. For example, even narrowly focusing on hardware for Ising models, here is a review of various implementations that could be cited: https://www.nature.com/articles/s42254-022-00440-8
4. Section 2 cites works where SNNs have performed well, but only does this for tasks outside of machine learning. For machine learning there is only a pointer to a survey, which is related to Loihi again, and again focuses largely beyond machine learning. This should be mitigated, especially because in reality, machine learning is arguably the more popular application of SNNs, and spiking machine learning models have in fact outperformed non-spiking ones concretely and under fair hardware conditions in certain cases. Here are the two examples that I am aware of: https://arxiv.org/abs/2009.06808 (under certain temporal dynamics SNNs were shown to be theoretically optimal and practically surpassed ANNs in accuracy) and https://openreview.net/forum?id=iMH1e5k7n3LI (spikes improved inference speed without accuracy drops, and even on GPUs).
5. Most importantly, I believe that the paper's contribution to the broader field of Neural Networks might not be significant, for the following reasons.
- 5a. The work is specifically related to SNNs alone, and specifically related to their use for combinatorial optimization and even more narrowly, specifically HMVC. That is a rather niche scenario.
- 5b. The results are on rather small scale demonstrations.
- 5c. It is unclear that there is any advantage from the neuromorphic aspect. Specifically, the presented heuristic-based algorithm (or a suitable adaptation) has not been tested on CPU. It seems that in the same way that previous SNN approaches were comparable to QUBO and could thus be run on CPU, there must be an analog of the new method that can also be tested on CPU, and might be more energy efficient than Loihi. After all, QUBO on CPU is more efficient than QUBO on Loihi, as the paper shows. Similarly, is QUBO on CPU the baseline to beat to claim a neuromorphic advantage, or should it be e.g. a microcontroller or an FPGA?
6. The authors mention that they could not change the random seed to obtain statistics. For a stochastic algorithm like the one presented, this seems rather important. Could this be mitigated, e.g. by running the algorithm on CPU and obtaining some statistics there?
Technical Quality: 3
Clarity: 2
Questions for Authors: Could the authors address the above points?
Is there an intuitive explanation why QUBO on Loihi is less efficient than on CPU, and does this explanation not apply to the authors' new approach?
Why can't the random seed be changed?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Some of the limitations mentioned in this review are mentioned in the paper, but not all.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the feedback.
1. We have indicated in L79 the practical applications of HMVC in "computational biology [9], computer network security [19], resource allocation [7] and social network analysis [23]." More fundamentally, HMVC is a general problem with many related formulations (set cover, hitting set, transversal, MVC, max independent set) [A], hence, HMVC algorithms have wide applicability. In short, HMVC is a significant problem and a good SNN algorithm for it will have major impact. If accepted, we will expand on the above.
2. Fig. 1 illustrates a toy problem and the previous QUBO-based SNN and our handcrafted SNN (SF-HMVC). If accepted, we will update the figure with solutions from both methods.
3. Note that our focus is on *SNN algorithms for optimization* (L39). In L29 we have touched upon IBM TrueNorth [25] and Intel Loihi [12,28] which are the two major neuromorphic hardware that have supported the development and experimentation of SNN-based optimization [6, 10, 11, 13, 24, 26, 28, 30, 32].
We agree that highlighting other potential implementations is useful. If accepted, we will cite the Nature Reviews Physics paper suggested by the reviewer, which covers diverse technologies at various stages of maturity such as spintronics, memristors, quantum annealers, etc.
4. Since our focus is on SNN for optimization, Sec. 2 mainly surveyed works related to that. Nevertheless, we agree that the section could include major works on SNN for machine learning. If accepted, we will include the references.
5. Our focus on SNN for optimization is closely aligned with the Primary Area of **Optimization (e.g., convex and non-convex, stochastic, robust)** in NeurIPS 2024.
- 5a. HMVC is a fundamental problem with wide applicability---we have justified this in point 1 above.
Second, as the reviewer said, SNN "... has been attracting growing interest". Combinatorial optimization is a major strand of research in SNN [8,11,13,17,21,22,26,32]. Moreover, the Nature Reviews Physics paper cited by the reviewer surveys numerous hardware implementations to *solve combinatorial optimization*---clearly this indicates the importance of research into novel hardware and algorithms for combinatorial optimization.
- 5b. Note that we have comprehensively evaluated the algorithm on the DIMACS benchmark [20] for MVC and synthetic instances for HMVC. As stated in L206, only instances that fit on Loihi 2 could be tested. Nonetheless, small HMVC instances (Tables 3 and 4) were already sufficient to illustrate the superiority of our `SF-HMVC-Loihi` over the previous `QUBO-Loihi`.
While the current capacity of Loihi 2 limits the problem size (which also affected [10, 13, 24, 28, 32]), our careful formulations and rigorous benchmarking provide a clear indication of the potential of our approach. Indeed, the reviewer counted "actual testing in neuromorphic hardware" as a strength.
See also our responses to K44F and 9cvo on scalability.
- 5c. The reviewer suggested the existence of a CPU analog of our new method that "might be more energy efficient than Loihi", but did not provide details of this algorithm.
Note that the previous SNN is QUBO-based and hence has a direct CPU analog. In contrast, our method is a *handcrafted* SNN for HMVC that is not QUBO-based. It is unclear what the CPU analog of our method is.
Second, the reviewer claimed that our paper showed that "QUBO on CPU is more efficient than QUBO on Loihi" (based on the context, we presume this meant *energy efficiency*). Note that all our results (Tables 2 and 4) point to a much higher energy consumption by the CPU solutions (`QUBO-CPU` and `ILP-CPU`) than the SNN solutions (`QUBO-Loihi` and `SF-HMVC-Loihi`); we believe the reviewer may have misread the results.
We believe CPU is the correct baseline since it is the currently dominant hardware for combinatorial optimization. Comparisons with microcontrollers and FPGAs are also interesting, which we will conduct as future research; thanks for the suggestion!
6. To obtain statistics of results, we simulated the SNNs on CPU via the Lava Software Framework; see `QUBO-Lava` and `SF-HMVC-Lava` in the PDF under **Author Rebuttal**. Note that Lava simulates asynchronous processing on CPU, hence the performance (particularly the energy consumption) does not closely reflect the performance on neuromorphic hardware.
It is also important to note that the Lava versions are not the *intrinsic* CPU analogs of the SNN methods.
**Why is QUBO on Loihi less efficient than on CPU?**
Again, we believe the reviewer is mistaken, since our results (Tables 2 and 4) point to a much higher energy consumption by `QUBO-CPU` than `QUBO-Loihi`.
Our method `SF-HMVC-Loihi` also consumed much less energy than `QUBO-CPU` and `ILP-CPU`.
**Why can't the random seed be changed?**
The current Intel API that was available to us did not include functionality to change the random seed on Loihi 2. However, note the additional results on Lava simulation mentioned above.
**Summary**
The reviewer did not report technical flaws. The main concerns seem to be on the relevance and significance of the contribution, which we have adequately addressed. We hope our clarifications on the experiments and results further demonstrate the significance of our findings.
[A] Wikipedia: Vertex cover in hypergraphs
---
Rebuttal 2:
Comment: I would like to thank the authors for their response.
It is helpful. However, some important parts remain partly unclear.
My understanding that QUBO on Loihi is less efficient than on CPU was based on the fact that except for its smallest versions, the problem did not fit on Loihi.
The Loihi 2 board that was used, as far as I understand, has 128 cores, whereas the compared Intel Core i7-11700K CPU has only 8. Also, both chips have about the same number of transistors. This seems to be a type of inefficiency on the Loihi side.
Based on this context, I will rephrase and expand on some of my previous questions that I found to be only partly addressed by the authors.
- What causes this hardware inefficiency? In other words, what is Loihi's hardware bottleneck for larger problems, be it with QUBO or with SF-HMVC, and is it a fundamental weakness of neuromorphic computing in general, or rather of the specific implementation that is Loihi?
- Is there a fundamental reason why SF-HMVC cannot have an analogue that would work on CPU? Essentially, is there a neuromorphic principle that is exploited for this approach but cannot be used with CPUs?
- I suspect that a hypothetical SF-HMVC-CPU implementation would not only be more efficient than QUBO (CPU or Loihi), but would also allow for larger problems with fewer hardware resources than SF-HMVC-Loihi. This suspicion is based on the comparison between QUBO-CPU and QUBO-Loihi. Is this suspicion wrong, and, if so, why?
---
Rebuttal 3:
Comment: As outlined in the abstract, our **claimed contribution** is a scalable handcrafted SNN called SF-HMVC for solving HMVC on neuromorphic computers. The claim was justified via scalability analysis and experiments on Loihi 2 that compared SF-HMVC against the previous QUBO-based SNN. The results on Loihi 2 also showed that SF-HMVC was more *energy-efficient* than the CPU solutions.
It seems Que6's Official Comments were mainly focused on *hardware efficiency*; based on the comments, this can roughly be understood as the "amount" of hardware resources required per "unit problem size".
It is debatable if basic metrics such as number of cores and transistors are meaningful for predicting the relative hardware efficiencies of processors with fundamentally different architectures (von Neumann versus neuromorphic) and characteristics [31], e.g., CPU cores focus on computation while neuromorphic cores integrate computation and memory.
In any case, the veracity of our **claimed contribution** does not strongly depend on the hardware efficiency of Loihi 2 relative to other neuromorphic implementations or CPU. Specifically,
- Due to our slack-free formulation, SF-HMVC will provably consume fewer hardware resources than QUBO-SNN on a neuromorphic computer, be it Loihi or other neuromorphic implementations that have higher hardware efficiency than Loihi.
- Even if neuromorphic computing is fundamentally less hardware-efficient than the CPU, our results present clear evidence of the superior *energy efficiency* of SF-HMVC compared to the CPU solutions, which is a major benefit for applications where energy efficiency is paramount.
Responding to Que6's questions:
- An SNN allocates one neuron to handle a specific variable in the input problem. Thus, the maximum problem size that an SNN can solve is limited by the number of neurons that a neuromorphic computer can support. The limit on HMVC size that is solvable on Loihi 2 is thus due to limitations in the hardware implementation, and not due to a fundamental weakness of neuromorphic computing.
- SF-HMVC is an SNN that we designed to operate on Loihi 2. We have added results that show that the simulation of SF-HMVC on CPU consumed several orders of magnitude more energy than SF-HMVC on Loihi 2. This at least shows that SF-HMVC could benefit from intrinsic neuromorphic processing in a way that its direct CPU simulation could not.
- Note that there are CPU algorithms for solving HMVC that do not require QUBO reformulation and/or additional slack variables. The baseline method ILP-CPU, which solves HMVC as an integer linear program using the Gurobi software, is one such method (see Sec. 5.1 **Competitors**). ILP-CPU was capable of solving all the HMVC instances in our experiments; thus, arguably, ILP-CPU is at least as "hardware-efficient" on the CPU as SF-HMVC is on Loihi 2. However, our results in Tables 3 and 4 show that ILP-CPU was *less energy-efficient* than SF-HMVC-Loihi.
It is possible that there exists a CPU analog of SF-HMVC that is more hardware-efficient on CPU than SF-HMVC is on Loihi 2; however, without sufficient details (e.g., pseudocode), it is not possible to prove or disprove the reviewer's claim/suspicion. More importantly, it is unknown whether such a CPU analog would be more *energy-efficient* than SF-HMVC-Loihi.
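To make the resource argument concrete, the toy count below contrasts a generic squared-penalty QUBO encoding (which couples every pair of variables in a hyperedge, including binary-encoded slack bits) against a handcrafted design with one constraint neuron per hyperedge. The exact accounting is our illustrative assumption, not the paper's formulation:

```python
import math

def qubo_synapses(hyperedges):
    """Toy estimate: a squared penalty over a hyperedge of size k plus its
    binary slack bits couples all pairs of those variables -> O(k^2) synapses."""
    total = 0
    for e in hyperedges:
        k = len(e) + max(1, math.ceil(math.log2(len(e) + 1)))  # vertices + slack bits
        total += k * (k - 1) // 2
    return total

def slack_free_synapses(hyperedges):
    """Toy estimate: one constraint neuron per hyperedge, wired to and from
    each member vertex -> O(k) synapses, with no slack variables at all."""
    return sum(2 * len(e) for e in hyperedges)
```

Under this accounting, the QUBO count grows quadratically in the hyperedge sizes while the slack-free count grows linearly, mirroring the linear-versus-quadratic scaling claim.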
---
Rebuttal Comment 3.1:
Comment: I would like to thank the authors for their responses. My concerns and lack of clarity have been largely addressed, and I found the paper interesting and potentially impactful. I am raising my score. | Rebuttal 1:
Rebuttal: We thank the AC for handling our submission and the reviewers for their insightful comments.
K44F, cGCZ and 9cvo thought that the paper was technically solid. Que6 also did not report any technical flaws.
Que6's concerns were mainly on relevance and significance: note that our focus on spiking neural networks (SNN) for optimization is closely aligned with the Primary Area of "Optimization (convex and non-convex, discrete, stochastic, robust)" in NeurIPS 2024. Moreover, hypergraph minimum vertex cover (HMVC) is a fundamental combinatorial problem with many related formulations and practical applications. Thus, an effective SNN algorithm for HMVC will have major impact.
Also, we believe Que6 misread the results; the data in Tables 2 and 4 clearly show that the SNNs executed on Loihi (`QUBO-SNN` and `SF-HMVC-SNN`) consistently consumed much less energy than the CPU solutions (`ILP-CPU` and `QUBO-CPU`).
Based on a comment by K44F and 9cvo, we have added scalability analysis of the proposed SF-HMVC. Briefly, SF-HMVC scales linearly whereas the baseline QUBO-based SNN scales quadratically with the HMVC problem size. The analysis was empirically validated in the experiments, where `QUBO-SNN` was infeasible on Loihi 2 for the HMVC instances, whereas `SF-HMVC-SNN` could solve the same instances on Loihi 2 effectively (see Tables 3 and 4). In future iterations of neuromorphic hardware, `SF-HMVC-SNN` will retain its computational advantages over `QUBO-SNN`, due to our slack-free formulation for HMVC and handcrafted SNN design (Sec. 4).
More broadly, recent developments on neuromorphic technology (e.g., [1]) indicate the near-term availability of large-scale neuromorphic processors and/or clusters, hence, the current capacity limitation on Loihi 2 should not deter the NeurIPS community from researching SNN algorithms.
We also added the following results in the attached PDF:
- [Tables R1 and R2] Following a suggestion by Que6, to circumvent the inability to change the random seed on Loihi 2 and obtain statistics on the solution quality, we simulated the SNNs on the CPU via the Lava framework, which allows changing the random seed (see `QUBO-Lava` and `SF-HMVC-Lava` in the uploaded PDF). Note that `QUBO-Lava` and `SF-HMVC-Lava` are not *intrinsic* CPU analogs of the SNN algorithms; hence, their performance (solution quality, energy consumption, runtime) is not reflective of what is achievable on the neuromorphic hardware.
- [Figure R3] Following a suggestion by cGCZ, we conducted ablation studies for the hyperparameters of `SF-HMVC-Loihi` on MVC and HMVC instances. The results show that there are hyperparameter configurations where our method consistently yielded high-quality solutions; see our response to cGCZ for more details of this experiment.
For more details of the above, please see our individual responses to the reviewers.
**Summary**
The reviewers found the paper technically solid and/or did not indicate technical flaws. We have addressed the main concerns on relevance, significance, scalability and hardware limitation, as well as obtained statistics on solution quality via SNN simulation on CPU. We thank the AC and reviewers again for their time.
Pdf: /pdf/592f9e31a720bb2f890540894917608fe6a30733.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AHA: Human-Assisted Out-of-Distribution Generalization and Detection | Accept (poster) | Summary: The paper presents a novel approach to the problem of OOD generalization and detection. The authors introduce the AHA (Adaptive Human-Assisted OOD learning) framework, which aims to enhance both out-of-distribution (OOD) generalization and detection by strategically leveraging human-assisted labeling within a maximum disambiguation region. The paper reports significant improvements over state-of-the-art methods with only a few hundred human annotations, demonstrating the efficacy of the proposed framework.
Strengths: - The AHA framework is a creative solution that addresses the challenges of OOD generalization and detection, which are critical for real-world applications of machine learning models.
- The authors provide extensive experimental results that validate the effectiveness of their approach, showing robust performance across various datasets.
- The incorporation of human feedback in a strategic manner is a strength, as it capitalizes on the limited labeling budget to maximize model performance.
- The paper's contributions are articulated, with the novel labeling strategy and the integration of human assistance being the highlights.
- The transformation of the problem into a noisy binary search is an intelligent methodological choice that allows for the efficient identification of the maximum ambiguity threshold.
Weaknesses: - While the paper demonstrates strong results, it is not clear how the AHA framework scales with larger and more complex datasets.
- The reliance on human annotations could be a limitation in scenarios where such resources are not readily available or are cost-prohibitive.
- The paper could benefit from a discussion on how the findings generalize beyond the tested datasets and scenarios.
- The computational complexity of the AHA algorithm and its runtime performance on large datasets are not discussed.
- The paper could address potential biases introduced by human labeling, especially in the context of OOD detection.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the AHA framework perform as the size and complexity of the dataset increase?
- What are the specific steps taken to mitigate potential biases in human labeling?
- Can the authors provide more details on the computational efficiency of the AHA algorithm, especially for large-scale applications?
- How does the framework handle a class imbalance in the context of OOD detection?
- Are there any specific domains or applications where the AHA framework is expected to be more or less effective, and why?
- Could the proposed method benefit the OOD detection with unreliable sources [R1] and inspire unsupervised OOD detection [R2]?
----
[R1] Out-of-distribution detection learning with unreliable out-of-distribution sources. NeurIPS 2023.
[R2] Out-of-distribution detection with an adaptive likelihood ratio on informative hierarchical vae. NeurIPS 2022.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the detailed comments and questions, which we address in detail below.
> *W1. While the paper demonstrates strong results, it is not clear how the AHA framework scales with larger and more complex datasets.*
We tested the AHA framework on the larger and more complex ImageNet benchmark. We use ImageNet-100 as the ID data, ImageNet-100-C with Gaussian noise as the covariate OOD data, and high-resolution natural images from iNaturalist for the semantic OOD data. Our AHA method still achieves much better performance in OOD detection in terms of FPR95 and AUROC compared to WOODS and SCONE.
Results for both OOD generalization and OOD detection are summarized below:
| Method | OOD Accuracy | ID Accuracy | FPR95 | AUROC |
| -------- | -------- | -------- |-------- |-------- |
| WOODS | $44.46$ | $86.49$ | $10.50$ | $98.22$ |
| SCONE | $65.34$ | $87.64$ | $27.13$ | $95.66$ |
| AHA (Ours) | **72.74** | $86.02$ | **2.55** | **99.35** |
> *W2. The reliance on human annotations could be a limitation in scenarios where such resources are not readily available or are cost-prohibitive.*
This is a valid point. There may indeed be special scenarios where resources are not available or are cost-prohibitive. We will include this in the limitations discussion section. However, our proposed AHA framework is label-efficient, requiring only a small portion of labels (1\% $\sim$ 2\%) to be effective, which significantly reduces the cost associated with human annotation requirements.
> *W3. The paper could benefit from a discussion on how the findings generalize beyond the tested datasets and scenarios.*
We thank you for the suggestion. The findings may not generalize to special scenarios where there is no clear boundary between covariate-shift and semantic-shift data, and these two types of OOD samples can be very hard to separate. In such cases, the proposed AHA method might downgrade and become comparable to random sampling. However, in most real-world scenarios, there are usually discernible differences between covariate and semantic shifts.
> *W4. The computational complexity of the AHA algorithm and its runtime performance on large datasets are not discussed.*
The computational complexity of the AHA algorithm is $O(Nk)$, where $N$ denotes the number of examples and $k$ denotes the budget size. The average runtime for the core part of the noisy binary search process in the AHA algorithm is 0.3232 milliseconds per point. For a large dataset, for example when the dataset size is 60k, the total runtime performance for the noisy binary search step is around 19 seconds. This step is inexpensive compared to neural network training.
For the last step, training of the multi-class classifier and OOD detector with selected and annotated examples based on the loss objective in Section 4.4 usually takes several hours to converge.
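As an illustrative sketch (our hypothetical reconstruction, not the paper's actual implementation), a noisy binary search of this kind repeats each comparison and takes a majority vote, so the query cost per point stays logarithmic in the dataset size:

```python
import random

def noisy_compare(x, threshold, flip_prob=0.2, rng=random):
    """Noisy oracle: answers whether x < threshold, flipped with prob. flip_prob."""
    ans = x < threshold
    if rng.random() < flip_prob:
        ans = not ans
    return ans

def noisy_binary_search(sorted_scores, threshold, repeats=15, flip_prob=0.2, rng=random):
    """Locate the insertion index of `threshold` in `sorted_scores` by
    majority-voting `repeats` noisy comparisons at each probe."""
    lo, hi = 0, len(sorted_scores)
    while lo < hi:
        mid = (lo + hi) // 2
        votes = sum(noisy_compare(sorted_scores[mid], threshold, flip_prob, rng)
                    for _ in range(repeats))
        if votes > repeats / 2:  # majority says scores[mid] < threshold
            lo = mid + 1
        else:
            hi = mid
    return lo
```

With `flip_prob = 0` this reduces to an ordinary binary search; with noise, the per-probe failure probability shrinks exponentially in `repeats`.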
> *W5. The paper could address potential biases introduced by human labeling, especially in the context of OOD detection.*
Here are the specific steps taken to mitigate potential biases in human labeling:
We propose using diverse labelers and providing thorough training for them on recognizing and avoiding biases. Additionally, we will implement validation procedures to ensure label quality, such as having a subset of data labeled by multiple people and utilizing expert reviewers. These steps could help to reduce the potential biases introduced by human labeling.
Moreover, our framework is label-efficient and only requires 1\% $\sim$ 2\% annotations for the wild data. This makes it practical for manual double verification to address potential biases and ensure label quality, even in cost-prohibitive scenarios.
> *Q4. How does the framework handle a class imbalance in the context of OOD detection?*
Our framework uses metrics such as AUROC, which are less sensitive to class imbalance and provide a more accurate assessment of performance even in imbalanced scenarios.
> *Q5. Are there any specific domains or applications where the AHA framework is expected to be more or less effective, and why?*
Referring to our response to W3, the AHA framework is expected to be less effective in rare scenarios where there is no clear boundary between covariate OOD and semantic OOD data, as these two types of OOD samples can be very hard to separate. We will add this discussion to the main paper.
> *Q6. Could the proposed method benefit the OOD detection with unreliable sources [R1] and inspire unsupervised OOD detection [R2]?*
This could be part of our future work to extend the AHA framework by considering unreliable sources and unsupervised OOD detection. We will cite and incorporate these two literature in the future work discussion section.
---
Rebuttal Comment 1.1:
Comment: Thanks for your feedback. It addresses my concerns, especially experiments on large-scale datasets. I have decided to increase the rating on soundness and my confidence. The need for human assistance is both the motivation of this work and its inevitable weakness, depending on the task requirements and the specific scenario. Overall, I am positive about this work (weak acceptance). However, I also would like to hear other reviewers' opinions and discuss this, and make further judgments.
---
Reply to Comment 1.1.1:
Comment: We thank you for reading our response and your positive feedback. | Summary: This paper proposes to address both out-of-distribution detection and generalization within one joint framework under human-assistance. The proposed method first utilizes a noisy binary search algorithm to identify the most informative samples to be labeled. Then, it continues to annotate these samples with human feedback. The authors conduct experiments on CIFAR and PACS to evaluate the proposed method.
Strengths: - The proposed method can handle OOD detection and generalization at the same time, which is impactful to both these two individual research areas.
- Covariate shifts and semantic shifts are both inevitable in real-world applications. Thus the proposed method is straightforward and well-motivated.
- Most parts of this paper are well presented with good visualization.
Weaknesses: - AHA may only work under a rather strict assumption. Compared to outlier exposure, AHA needs to access the real test data distribution $S_{wild}$ to selectively label some samples. While outlier exposure does not need such an assumption. Many previous OOD detection methods can only access training data distribution and an auxiliary OOD data distribution (noted that such auxiliary OOD has no overlapped samples with test-time OOD data in common settings). Thus I think AHA may only work under a more strict assumption (i.e., the test data distribution $S_{wild}$ is accessible) than previous outlier exposure.
- I generally believe enhancing OOD detection and generalization with human feedback is laborious. I am aware the proposed method can achieve good performance with hundreds or thousands of labeled samples in many cases. However, such human feedback still seems laborious to me. For example, in the CIFAR experiments, the images are only 32*32, so I think it would take a lot of time to label such samples. Not to mention the samples can be noisy or corrupted (CIFAR-10-C).
- The experiments are not adequate. It is widely acknowledged that OOD detection and generalization are more difficult on large-scale high-resolution datasets. For example, ImageNet benchmarks. The authors do not conduct evaluations on such datasets.
- In some cases, there may be no clear boundary between covariate-shift data and semantic shift data. As mentioned in recent work[1], these two types of OOD samples can be very hard to separate (even for humans). Could the authors comment on this phenomenon?
Given the above points, I tend to reject this paper because the overall quality does not meet the expectations of NeurIPS. However, I may adjust my score if there is a strong argument.
[1] Yang, William, Byron Zhang, and Olga Russakovsky. "ImageNet-OOD: Deciphering Modern Out-of-Distribution Detection Algorithms."
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to weaknesses and address my concerns.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thorough comments and questions, which we address in detail below.
> *W1. AHA may only work under a rather strict assumption.*
We would like to clarify that the unlabeled wild data distribution $S_{\text{wild}}$ is totally different from the test distribution. Our driving motivation is to exploit unlabeled wild data with various distribution shifts naturally arising in real-world scenarios. Thus, we consider the following generalized characterization of the wild data to model the realistic environment:
$P_\text{wild}:= (1-\pi_s-\pi_c) P_\text{in} + \pi_c P_\text{out}^\text{covariate} +\pi_s P_\text{out}^\text{semantic}$
Moreover, outlier exposure has a more strict assumption of careful data cleaning to ensure the auxiliary outlier data does not overlap with the ID data. Compared to outlier exposure, we relax this assumption by leveraging unlabeled "in-the-wild" data, which is a mixture of ID, covariate OOD, and semantic OOD data commonly observed in real-world applications.
> *W2. I generally believe enhancing OOD detection and generalization with human feedback is laborious.*
Our method significantly reduces the labeling effort compared to traditional approaches. With no more than 2\% annotations of the wild data, our method outperforms existing state-of-the-art methods by reducing OOD detection error by 15.79\% and increasing OOD generalization accuracy by 5.05\%.
Regarding the CIFAR experiments, while the images are small (32x32), our method only requires binary "In" vs. "Out" labels for the selected semantic OOD data, and category labels for the covariate OOD data. This simple classification task is much faster than providing detailed labels or descriptions, significantly reducing the time and effort needed from human annotators.
We also note that labeling a few hundred OOD samples is generally much cheaper than labeling in-distribution examples. For reference, labeling 200 images with five labelers each costs less than $10 on Amazon Mechanical Turk. We believe the trade-off between this minimal labeling effort and the substantial performance improvement, especially in challenging OOD scenarios, makes our approach particularly efficient and practical for real-world applications.
> *W3. The experiments are not adequate. It is widely acknowledged that OOD detection and generalization are more difficult on large-scale high-resolution datasets.*
The experiments on large-scale, high-resolution datasets are included in **Appendix G**. Following the ImageNet benchmark for joint OOD generalization and OOD detection as used in SCONE literature [1], we use ImageNet-100 as the in-distribution data, with labels details provided in **Appendix E**. For the covariate OOD data, we use ImageNet-100-C with Gaussian noise in the experiment. For the semantic OOD data, we use the high-resolution natural images from iNaturalist.
Results for both OOD generalization and OOD detection evaluation are summarized below:
| Method | OOD Accuracy | ID Accuracy |FPR95 |AUROC |
| -------- | -------- | -------- |-------- |-------- |
| WOODS | $44.46$ | $86.49$ | $10.50$ | $98.22$ |
| SCONE | $65.34$ | $87.64$ | $27.13$ | $95.66$ |
| AHA (Ours) | **72.74** | $86.02$ | **2.55** | **99.35** |
These experiments demonstrate that our method maintains its effectiveness on more complex datasets.
[1] Feed Two Birds With One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection. ICML 2023.
> *W4. In some cases, there may be no clear boundary between covariate-shift data and semantic shift data.*
We acknowledge the valid point raised about the potential difficulty in distinguishing between covariate-shift and semantic-shift data in some cases. However, in such scenarios, our proposed AHA method would naturally default to unbiased sampling, performing no worse than a random sampling strategy.
Moreover, it's important to note that such cases are relatively rare in practice. Most real-world scenarios do exhibit discernible differences between covariate and semantic shifts, as verified empirically in **Section 5.4** of our main paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed explanation in the rebuttal. My concern about performance on high-resolution large-scale benchmarks has been addressed (W3). The argument for W2 is also OK for me. I recommend adding these to the paper to enhance transparency about the cost of additional labeling. However, I still have further concerns about the basic settings of this paper. AHA assumes that there is an overlap between the test-time covariate-shifted data distribution and the training-time unlabeled data. For instance, it assumes that during training, the model can access samples corrupted by Gaussian noise. This is weird to me and may be a rather strict condition compared to those used in standard OOD generalization methods. What if we keep the test-time covariate-shifted distribution unknown? For example, only expose the model to one certain type of corruption, but test it on another type of corruption.
---
Rebuttal 2:
Title: Response to Reviewer Rqpf (Followup)
Comment: Thank you for taking the time to read our response. We address your comments below.
> *1. I still have further concerns about the basic settings of this paper.*
We would like to clarify that the wild data setting, which includes a mixture of ID, covariate OOD, and semantic OOD data, is commonly observed in practice. The overall wild mixture data distribution $P_\text{wild}$ differs from the test environment data distribution. Additionally, the mixing ratios $\pi_s$ and $\pi_c$ are unknown in our formulation, making this setting well-suited to real-world scenarios with varied distributions of wild data. We have also empirically tested different mixing ratios, as detailed in **Appendix H**, where we demonstrate the robust and strong performance of our AHA framework.
Moreover, as suggested, we have summarized the OOD accuracy results on covariate test data when the model is exposed to one type of corruption but tested on another. Specifically, we exposed the model to wild data containing Gaussian noise corruption and then tested it on nine other types of corruption: impulse noise, spatter, shot noise, saturate, speckle noise, frosted glass blur, motion blur, frost, and zoom blur. The results indicate AHA displays strong performance, even when the test-time covariate-shifted distribution remains unknown.
| Algorithm | Impulse noise | Spatter | Shot noise | Saturate | Speckle noise | Frosted glass blur | Motion blur | Frost | Zoom blur | Average |
| --------- | :-------------: | :----------------: | :----------: | :----------: | :-------------: | :---------: | :------------------: | :---------------: | :----------: | :----------: |
| ERM | $85.15$ | $92.88$ | $84.62$ | $92.49$ | $84.46$ | $52.29$ | $89.10$ | $89.67$ | $85.55$ | $84.02$ |
| AHA (ours)| **87.25** | **93.03** | **90.90** | **92.72** | **90.77** | **63.85** | **89.35** | **91.52** | **86.58** | **87.33** |
> *2. Regarding the argument for W2.*
Thank you for your constructive comments. We will incorporate discussions about the cost of additional labeling in the paper as suggested.
---
Rebuttal Comment 2.1:
Comment: Thank you again for your constructive comments. If we've successfully resolved your previous concerns, we would appreciate an improved score. If you have any further comments, please feel free to let us know. We're happy to discuss and address any concerns.
---
Rebuttal 3:
Comment: Thanks for your response and hard work during the rebuttal. I am pleased to acknowledge that this paper has no significant flaws, and the additional experiments are well-appreciated. However, I agree with other reviewers that the need for human assistance is both an advantage and an inevitable weakness of AHA. The uncommon setting should also be carefully explained and compared with classic domain adaptation or OOD generalization. It seems to be a stricter assumption than those used in domain adaptation or OOD generalization (accessing samples drawn from the test-time covariate-shift distribution and labeling some of them). Removing such an assumption could greatly strengthen the proposed method (e.g., involving various types of corruption). Theoretical support for why unlabeled data can help both OOD detection and generalization in AHA can also be taken into consideration in future work. With a better understanding of this paper, I have adjusted my score accordingly.
---
Rebuttal Comment 3.1:
Comment: Thank you for your insightful feedback and constructive comments, which have been invaluable in enhancing our manuscript. We will incorporate the additional results (including various types of corruptions) and discussions in the paper as suggested. Regarding the need for human assistance, there are many applications where human annotation is particularly useful, such as medical diagnostics. In these contexts, human assistance provides crucial expertise and contextual information, and should be considered a significant advantage rather than a limitation. | Summary: This paper introduces a novel, integrated approach AHA (Adaptive Human-Assisted OOD learning) to simultaneously address both OOD generalization and detection through a human-assisted framework by labeling data in the wild. Extensive experiments validate the efficacy of AHA.
Strengths: 1. this paper is well written and easy to follow
2. good visualization and extensive experiments
3. Maximum Disambiguation Region and reduction to noisy binary search are new to me
Weaknesses: 1. there exists a strong assumption that the weighted densities of semantic and covariate OOD should be equal
2. what is the difference between active learning and the proposed Human-Assisted Learning
3. what is the time used for the noisy binary search
4. whether the lambda searched on one dataset can be transferred to another dataset
Technical Quality: 3
Clarity: 3
Questions for Authors: see weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback and comments. We address each comment below in detail.
> *W1. There exists a strong assumption that the weighted densities of semantic and covariance ood should equalize*
Thank you for pointing out this potential misunderstanding. We would like to clarify that our formulation does not require the strong assumption that the weighted densities of semantic and covariate OOD should be equal. This holds for both our maximum disambiguation region formulation (Section 4.2) and the AHA algorithm (Section 4.3).
Specifically, our maximum ambiguity threshold formulation in Eq. (1):
$\lambda_* = \arg\max_{\mu \in \mathbb{R}} \int_0^\mu \big((1-\pi_c -\pi_s)p_{\text{in}}(\nu) + \pi_c p_{\text{covariate}}(\nu) - \pi_s p_{\text{semantic}}(\nu)\big) \, d\nu$
indicates that if the density of covariate OOD is much smaller than semantic OOD, we would choose the smallest OOD scoring value. Conversely, if the density of covariate OOD is much larger than semantic OOD, we would choose the largest OOD scoring value, where we try to sample as many semantic OOD examples as possible.
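For intuition only, here is a hypothetical empirical analogue of this objective: given OOD scores and oracle type labels (+1 weight for ID/covariate mass, -1 for semantic mass), the score maximizing the cumulative signed sum plays the role of $\lambda_*$. The paper's algorithm does not assume such labels; this sketch merely visualizes Eq. (1):

```python
import numpy as np

def empirical_ambiguity_threshold(scores, is_semantic):
    """Empirical analogue of Eq. (1): pick the score maximizing the cumulative
    difference between non-semantic (ID + covariate OOD) and semantic OOD mass."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    signs = np.where(np.asarray(is_semantic)[order], -1.0, 1.0)
    best = int(np.argmax(np.cumsum(signs)))
    return scores[order][best]
```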
> *W2. What is difference between active learning and the proposed Human-Assisted Learning*
Human involvement in labeling data points has been studied under various terminologies, including human-in-the-loop learning, bandits, interactive learning, and active learning. The title "Human-Assisted Learning" stems from our finding that human assistance can dramatically improve OOD generalization and detection performance.
We provide discussions on active learning in **Appendix C**. To offer more context here, classic active learning involves an iterative training process, while our proposed algorithm only requires a single model fine-tuning. Additionally, existing deep active learning works do not study OOD robustness and the challenges posed by realistic scenarios involving wild data. Our proposed AHA method is specifically tailored for both OOD generalization and detection challenges.
> *W3. What is the time used for noisy binary search*
The average time for the noisy binary search process is 0.3232 milliseconds per point on Tesla V100 GPU.
> *W4. Whether the lambda searched on one dataset can be transferred to another dataset*
The lambda searched on one dataset is not transferable to another dataset due to variations in data distributions. While lambda itself can't be transferred, the process of searching for the optimal lambda for each dataset is efficient and not computationally expensive. For example, the processing time for a dataset with 60k samples is about 19 seconds when using the Tesla V100 GPU.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer Eoxk
Comment: Thank the authors for answering my questions. I would like to keep my rating "Borderline accept".
---
Reply to Comment 1.1.1:
Comment: Thank you again for your positive evaluation and feedback. If we have successfully addressed your questions, we would greatly appreciate an improved score. Otherwise, we are more than willing to provide additional discussions to address any further concerns. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution | Accept (poster) | Summary: This paper studies how to search for the best heuristics for combinatorial optimization problems (COPs) with large language models (LLM). The key idea is to use genetic programming (GP) to dynamically update the heuristics with LLM. Short-term reflection and long-term reflection are also incorporated in the GP algorithm as the guidance. The proposed methods are demonstrated on five different COPs and compared against other heuristic solvers and neural solvers.
Strengths: 1. Applying LLM to generate heuristics for COPs is novel.
2. The experiments are extensive and detailed.
3. The paper is well-written and the idea is easy to understand, except that the details of the GP algorithm are a bit vague.
Weaknesses: 1. The proposed ReEvo seems to rely on existing heuristic or neural solvers. Can ReEvo discover new heuristics independently? Incorporating LLMs to generate heuristics for COPs is interesting, but it seems trivial if the LLM can only be used as a post-processing step to improve existing methods.
2. The experiments are not conducted with state-of-the-art (SOTA) methods. For example, on TSP problem, can ReEvo improve the LKH-3 [1] solver? Can ReEvo improve the non-autoregressive neural solvers such as DIFUSCO [2], T2TCO [3]?
3. The writing in Sec. 4 needs improvement. For example, how are the short-term and long-term reflections utilized to improve heuristic design? How is the generated code guaranteed to be executable?
[1] Helsgaun, Keld. "An extension of the Lin-Kernighan-Helsgaun TSP solver for constrained traveling salesman and vehicle routing problems." Roskilde: Roskilde University 12 (2017): 966-980.
[2] Sun, Zhiqing, and Yiming Yang. "Difusco: Graph-based diffusion solvers for combinatorial optimization." Advances in Neural Information Processing Systems 36 (2023): 3706-3731.
[3] Li, Yang, et al. "From distribution learning in training to gradient search in testing for combinatorial optimization." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: I would appreciate the authors' response on my main concerns listed above.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I have no concerns regarding the impact of the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in reviewing our work, and the insightful questions and suggestions you raised. We respond to your comments below.
> W1: Reliance on existing heuristic or neural solvers.
ReEvo can discover new heuristics independently but integration currently leads to better performance. In Section 5.4, we show that ReEvo can independently discover new constructive heuristics for TSP, which outperform the algorithm designed by traditional hyper-heuristics.
ReEvo improves the key algorithmic components of existing methods, such as the perturbation in GLS, the crossover and mutation in GA for EDA, the heuristics in ACO, and the attention in NCO. We leave it to future work to further explore its potential to independently discover new complicated heuristics. Under an agentic framework, by first designing a skeleton of the program and then each individual function, we may be able to generate high-performing long programs using LLMs.
> W2: Experiments with SOTA methods, such as T2TCO, DIFUSCO, and LKH-3.
LHH is complementary to your suggested SOTA methods.
- LKH-3 is a specialized solver for the TSP, which has been heavily optimized for the VRP over decades. LHHs, on the other hand, are more general and can be applied to any COP. They are especially suitable for cases where expert knowledge is lacking or the problem is treated as a black box.
- DIFUSCO and T2TCO require optimal solutions for training, which may not be available for many problems. LHHs, on the other hand, only require heuristic evaluations. We compare against T2TCO on TSP500 and TSP1000, each with 128 test instances. Below we report the optimality gap and running time for solving all instances. They indicate the competitive performance of GLS-ReEvo.
Method | TSP500 (gap; time) | TSP1000 (gap; time)
--- | --- | ---
T2TCO | 0.37% (16m) | 0.78% (55m)
GLS (16 CPU cores) | 0.39% (12m) | 0.80% (2.6h)
GLS-ReEvo (16 CPU cores) | 0.25% (12m) | 0.72% (2.6h)
- LHH and general ML4CO [1] are complementary, each with its unique strengths and limitations. Some advantages of LHHs over ML4CO are:
- LHHs generate interpretable heuristics (code snippets), while ML4CO generates black-box parameterized policies. Interpretable heuristics offer insights for human designers and can be more reliable in practice when faced with dynamic environments, limited data, distribution shifts, or adversarial attacks.
- LHHs generate heuristics that are more efficient in terms of computational resources, as they do not require GPU during deployment.
- LHHs require fewer than 100 heuristic evaluations and about 5 minutes to evolve a strong heuristic, while many ML4CO methods require millions of samples and days of training. When the solution evaluation is expensive, LHHs are more practical.
- LHHs only need some text-based (and even black-box) explanations to guide the search. ML4CO requires the development of NN architectures, hyperparameters, and training strategies, where informed inductive biases and manual tuning are crucial to guarantee performance.
[1] https://github.com/Thinklab-SJTU/awesome-ml4co
> W3-1: The writing in Sec. 4 needs improvement. For example, how the short-term and long-term reflection is utilized to improve heuristic design?
We will improve the writing in Section 4 according to your suggestion.
An example of both short- and long-term reflection is illustrated in Figure 1(b). The process prompts LLM to reflect upon the relative performance of two heuristics (short-term), and accumulate such knowledge over iterations (long-term). The inputs and outputs of the reflection process are all texts.
**Short-term reflection.**
- Inputs: Two heuristics to crossover and indicators of their relative performance.
- Outputs: Text-based reflections upon the performance of the two heuristics, i.e. why the better heuristic is better, and how the worse heuristic can be improved. It is analogous to a verbal gradient derived from the performance comparison.
- How it is utilized: ReEvo incorporates the short-term reflections into the crossover prompt, leading to more informed offspring.
**Long-term reflection.**
- Inputs: prior long-term reflections and short-term reflections of the current iteration.
- Outputs: Text-based reflections upon the overall performance of the population, and strategic hints for better heuristic design.
- How it is utilized: ReEvo incorporates the long-term reflections into the mutation prompt, leading to more strategic exploration.
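To make the two reflection roles concrete, one iteration of the loop described above can be sketched as follows (an illustrative sketch only; `llm` and `evaluate` are placeholder callables, and the prompts are paraphrased, not the actual ReEvo prompts):

```python
def reevo_iteration(population, evaluate, llm, long_term_memory):
    """One illustrative ReEvo step: reflect, crossover, distill, mutate.

    population: list of heuristics as code strings.
    evaluate: heuristic -> objective value (lower is better, minimization).
    llm: prompt string -> response string (any chat-completion backend).
    long_term_memory: accumulated long-term reflection text.
    """
    # Select two parent heuristics and compare their fitness.
    ranked = sorted(population, key=evaluate)
    better, worse = ranked[0], ranked[1]

    # Short-term reflection: a "verbal gradient" from the pairwise comparison.
    short_reflection = llm(
        f"Why does heuristic A outperform heuristic B, and how can B improve?"
        f"\nA:\n{better}\nB:\n{worse}")

    # Crossover guided by the short-term reflection.
    offspring = llm(
        f"Combine the two heuristics, following this advice:\n"
        f"{short_reflection}\nA:\n{better}\nB:\n{worse}")

    # Long-term reflection: distill accumulated knowledge across iterations.
    long_term_memory = llm(
        f"Prior insights:\n{long_term_memory}\n"
        f"New insight:\n{short_reflection}\nSummarize strategic hints.")

    # Mutation guided by the long-term reflection.
    mutant = llm(
        f"Improve this heuristic using these hints:\n"
        f"{long_term_memory}\n{offspring}")

    return population + [offspring, mutant], long_term_memory
```

The key design point this sketch highlights is that short-term reflections feed the crossover prompt, while their distilled long-term summary feeds the mutation prompt.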
> W3-2: How to guarantee the generated code is executable?
If the generated code is not executable, it is discarded.
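As a minimal illustration of this filter (our own sketch, not the actual ReEvo code): a generated Python snippet can be compiled and executed in an isolated namespace, and discarded on any exception.

```python
def is_executable(code_str):
    """Return True if the generated snippet compiles and runs without error.

    Sketch only: a production version would additionally sandbox the
    execution and enforce a timeout.
    """
    try:
        compiled = compile(code_str, "<generated>", "exec")
        exec(compiled, {})  # run in a fresh, isolated namespace
        return True
    except Exception:
        return False
```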
---
Rebuttal 2:
Title: Request for Feedback
Comment: As the author-reviewer discussion will end soon (< 24 hours from now), we would greatly appreciate it if you could take a moment to review our response. Please let us know if you have any further questions or concerns.
---
Rebuttal Comment 2.1:
Comment: Thank you for the rebuttal. While I have major concerns regarding your model's contributions to the CO community, you indeed conduct extensive experiments to demonstrate your model. So I will maintain my score, which is leaning acceptance.
---
Reply to Comment 2.1.1:
Title: Thanks for reviewing
Comment: Thank you again for reviewing and for your constructive feedback. | Summary: The paper presents an LLM-enhanced evolutionary algorithm to solve diverse combinatorial optimization problems. The method is evaluated on various COPs and heuristics; however, the advantage of the proposed method compared to recent ML4CO methods that do not use LLMs requires more specification.
Strengths: 1. The paper proposes an LLM-enhanced hyper-heuristic model to solve various combinatorial optimization problems.
2. Extensive experiments show the efficacy of the proposed method on five different COPs and multiple heuristics.
Weaknesses: 1. Some details of the model are not fully specified in the text, including the meaning and output of "short-term reflection" and "long-term reflection". It is unclear how the process is carried out and how it completes the "reflection".
2. Though the proposed method has demonstrated advances over heuristics on specific problems (take TSP as an example), there are recent models undiscussed in this paper that do not use heuristics but still achieve competitive results, such as [1,2], which conduct TSP experiments with 1000 and 10000 nodes.
3. The paper does not specify the costs of using LLM APIs, which could be a real challenge when readers want to follow and reproduce the results in this paper.
[1] Qiu R, Sun Z, Yang Y. Dimes: A differentiable meta solver for combinatorial optimization problems[J]. Advances in Neural Information Processing Systems, 2022, 35: 25531-25546.
[2] Li Y, Guo J, Wang R, et al. From distribution learning in training to gradient search in testing for combinatorial optimization[J]. Advances in Neural Information Processing Systems, 2024, 36
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What result does Figure 4(B) show: the objective value or the gap to the optimum? Without the evaluation metric specified, it is difficult to interpret the results.
2. How does the proposed method compare to recent ML4CO methods on TSP [1,2], or possibly other methods on VRP? In my experience, hyper-heuristic methods may not perform as well as these methods in solution quality or solving time. What is the advantage of LLM-enhanced heuristics on these COPs?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The paper discusses some limitations, but the method may be limited by the cost of LLM inference and by problem scale.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in reviewing our work, as well as the insightful comments and questions you raised.
> W1: Details of reflections.
We will revise the writing for better clarity according to your suggestions. An example of both short- and long-term reflection is illustrated in Figure 1(b). The process prompts LLM to reflect upon the relative performance of two heuristics (short-term), and accumulate such knowledge over iterations (long-term). The inputs and outputs of the reflection process are all texts.
**Short-term reflection.**
- Inputs: Two heuristics to crossover and indicators of their relative performance.
- Outputs: Text-based reflections upon the performance of the two heuristics, i.e. why the better heuristic is better, and how the worse heuristic can be improved. It is analogous to a verbal gradient derived from the performance comparison.
- How it is utilized: ReEvo incorporates the short-term reflections into the crossover prompt, leading to more informed offspring.
- Meaning: LLM reflects upon the performance of the two heuristics, providing insights for better heuristic design. It only targets the two heuristics for the current crossover, and does not consider the historical performance of the population.
**Long-term reflection.**
- Inputs: prior long-term reflections and short-term reflections of the current iteration.
- Outputs: Text-based reflections upon the overall performance of the population, and strategic hints for better heuristic design.
- How it is utilized: ReEvo incorporates the long-term reflections into the mutation prompt, leading to more strategic exploration.
- Meaning: LLM accumulates knowledge over iterations, combining past experiences with new insights to offer long-term strategic hints.
> W2: Undiscussed recent baselines.
We comprehensively compare against 25 baselines across 6 COPs and 5 algorithmic types, under both black-box and white-box settings, and in terms of both LHH and generated heuristics. 8 of these baselines were published after 2023. We show that LHH can enhance ML4CO solvers in Section 5.5, involving TSP/CVRP with 1000 nodes.
Here, we compare against the suggested SOTA baselines, on TSP500 and TSP1000 each with 128 test instances. Below we report the optimality gap and running time for solving all instances. They indicate the competitive performance of GLS-ReEvo. Please note that T2TCO requires optimal solutions for training, which may not be available for many problems, while ReEvo only requires heuristic evaluations.
Method | TSP500 (gap; time) | TSP1000 (gap; time)
--- | --- | ---
DIMES | 1.76% (2.2h) | 2.46% (4.6h)
T2TCO | 0.37% (16m) | 0.78% (55m)
GLS (16 CPU cores) | 0.39% (12m) | 0.80% (2.6h)
GLS-ReEvo (16 CPU cores) | 0.25% (12m) | 0.72% (2.6h)
We respectfully bring to your attention that LHH and ML4CO [1] are complementary, each with its unique strengths and limitations. Some advantages of LHHs over ML4CO are:
- LHHs generate interpretable heuristics (code snippets), while ML4CO usually generates black-box parameterized policies. Interpretable heuristics offer insights for human designers and can be more reliable in practice when faced with dynamic environments, limited data, distribution shifts, or adversarial attacks.
- LHHs generate heuristics that are more efficient in terms of computational resources, as they do not require GPU during deployment.
- LHHs require fewer than 100 heuristic evaluations and about 5 minutes to evolve a strong heuristic, while many ML4CO methods require millions of samples and days of training. When the solution evaluation is expensive, LHHs are more practical.
- LHHs only need some text-based (and even black-box) explanations to guide the search. ML4CO requires the development of NN architectures, hyperparameters, and training strategies, where informed inductive biases and manual tuning are crucial to guarantee performance.
[1] https://github.com/Thinklab-SJTU/awesome-ml4co
> W3: The cost of using LLM APIs
As reported in Appendix B (line 858), each run costs about $0.06 when using GPT3.5 Turbo.
> Q1: The y-axis of Figure 4(B).
As indicated by the y-axis label, Figure 4(B) shows the objective value of the best generated heuristic, against the number of heuristic evaluations for an LHH.
> Q2-1: Comparisons with recent ML4CO method on TSP and VRP.
Please refer to our response to W2.
> Q2-2: Advantage of LLM-enhanced heuristics over ML4CO methods.
Please refer to our response to W2.
> Potential limitations regarding LLM inference cost and problem scale
Please also refer to our response to W2 and W3.
**Inference cost.** Please note that we don't need LLM inference once the heuristic is generated, and the generated heuristics are efficient in terms of computational resources. For LHH search, deploying local LLMs is also feasible, as verified in our work with Llama 3.
**Problem scale.** We have tested our method on large-scale problems, such as TSP1000 and CVRP1000. LHHs are currently most effective when integrated with existing algorithms. As long as the algorithm is scalable, its LHH-enhanced version is scalable as well.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's detailed response. I agree with the point on the distinctive advantage of LHH compared with ML4CO, where LHH could benefit CO problem-solving. I would raise my score to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for reviewing
Comment: Thank you again for reviewing and for your constructive feedback. | Summary: This article proposes a large language model (LLM) assisted evolutionary computation (EC)-based method, to solve combinatorial optimization problems. It incorporates a reflection mechanism to enhance performance in black-box settings.
Strengths: 1. The method shows an impressive performance on several CO problems with a black-box setting.
2. The article is well-written.
Weaknesses: 1. The reflection technique has been well-developed and widely used in prompt engineering and code generation [1,2].
2. Evolution methods based on LLMs have already been adopted in EoH [3] and FunSearch [4], which limits the contribution of the framework.
[1] Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems 36 (2024).
[2] Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797 (2023).
[3] Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model. Forty-first International Conference on Machine Learning. 2024.
[4] Mathematical discoveries from program search with large language models. Nature 625.7995 (2024): 468-475.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. As an EC-based method, what is the primary distinction or improvement compared to EoH, FunSearch, and other similar methods [1,2,3]? The reflection technique was developed early and is widely used in prompt engineering and code generation (shown in Weakness 1). To what extent do you think designing more complex reflection and prompt engineering strategies contributes to optimization [4]?
2. In the ablation experiments, Please include ReEvo based on more open-source LLMs (such as DeepSeek and Gemini).
3. Testing on the online bin packing problem is recommended [5].
4. In Table 1, the result of EoH is drawn from the literature, testing it with the released code would be beneficial.
5. ReEvo requires three LLM calls to complete a single iteration, while other compared EC+LLM methods (such as EoH) need only one LLM call per iteration. A comparison in terms of the number of LLM calls would be beneficial.
[1] Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. arXiv preprint arXiv:2309.08532 (2023).
[2] Large Language Model-Aided Evolutionary Search for Constrained Multiobjective Optimization. arXiv preprint arXiv:2405.05767 (2024).
[3] Large language models as evolutionary optimizers. arXiv preprint arXiv:2310.19046 (2023).
[4] Are Large Language Models Good Prompt Optimizers? arXiv preprint arXiv:2402.02101 (2024).
[5] Mathematical discoveries from program search with large language models. Nature 625.7995 (2024): 468-475.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in reviewing our work, as well as the insightful comments and questions you raised.
> W1: The reflection technique has been well-developed and widely used in prompt engineering and code generation.
Thank you for raising this point. Our work introduces a novel integration of reflection with evolutionary search. As noted by Reviewer v9WQ, this integration is effectively a novel technique to load in-context knowledge without incurring memory blowups.
We respectfully draw your attention to the fact that the reflection techniques, which were initially proposed at NeurIPS 2023 [1, 2], are still under active development. Adapting reflection for a downstream application itself constitutes a valuable contribution; recent examples include its application to translation tasks [3, 4].
> W2: Evolution methods based on LLM have been adopted in EoH and Funsearch, limiting the contribution to the framework.
Thank you for highlighting the work done in EoH and FunSearch, which we extensively cite and acknowledge in our paper. These groundbreaking studies indeed lay the foundation for our work. We draw significant inspiration from them and aim to extend their contributions in meaningful ways.
We believe that the combination of LLMs with EA for CO can be explored through three primary lenses: (1) the search algorithm, (2) the downstream CO applications (the search space), and (3) the evaluation methodologies (ways to evaluate how well we search). We believe our work contributes to all three aspects:
- **Search Algorithm**: We introduce the Reflective Evolution method, demonstrating its superior sample efficiency.
- **Applications**: Our work broadens the scope by applying this paradigm to five heterogeneous algorithmic types and six different combinatorial optimization problems, advancing the state of the art in GLS, EDA, ACO, and NCO methods.
- **Evaluation Methodologies**: We employ fitness landscape analysis to explore the underlying mechanisms of our proposed method; we establish black-box experimental settings to ensure reliable comparisons and practical relevance to real-world applications.
We believe that building upon and extending the foundations laid by EoH and FunSearch, as we have done, constitutes solid contributions.
> Q1-1: The primary distinction or improvement over prior methods.
Please refer to our response to W1 and W2.
> Q1-2: The contributions of designing reflections and prompt engineering.
We verify its contributions in Section 6. They are important for better sample efficiency (e.g. using less than 100 evaluations for designing strong heuristics). Sample efficiency is crucial for real-world applications where heuristic evaluation could be expensive.
Reflection is more effective when using capable LLMs, such as GPT-3.5 Turbo and its successors, as discussed in prior works [1]. Allowing a large number of heuristic evaluations, or implementing weak open-source LLMs, could obscure the impact of reflection or other prompting techniques, as reported in [5].
> Q2: Implement ReEvo based on more open-source LLMs for ablation.
According to Shinn et al. [1], "self-reflection relies on the power of the LLM’s self-evaluation capabilities and not having a formal guarantee for success." Currently, many open-source LLMs are not capable enough to guarantee the statistically significant improvement of reflections. However, as LLM capabilities improve, we only expect this paradigm to get better over time [1].
> Q3: Testing on the online bin packing problem.
Below we present the comparative results on online BPP (weibull 5k), which are averaged over 3 runs.
| EoH iteration | 0 | 1 | 2 | 3 |
| ----------------- | ------ | ------ | ------ | ------ |
| total evaluations | | | | 135 |
| best obj. (mean) | 4.2844 | 4.2844 | 4.2744 | 4.2511 |
| best obj. (std) | 0.0000 | 0.0000 | 0.0141 | 0.0286 |
| best obj. (min) | 4.2844 | 4.2844 | 4.2545 | 4.2145 |
| ReEvo iteration | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| total evaluations | 30 | 40 | 45 | 55 | 60 | 70 | 75 | 85 | 90 | 100 | 105 |
| best obj. (mean) | 4.2844 | 4.2844 | 4.2844 | 4.2844 | 4.2844 | 3.1026 | 3.0560 | 3.0560 | 3.0560 | 1.6379 | 1.6379 |
| best obj. (std) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.6713 | 1.6394 | 1.6394 | 1.6394 | 1.1943 | 1.1943 |
| best obj. (min) | 4.2844 | 4.2844 | 4.2844 | 4.2844 | 4.2844 | 0.7390 | 0.7390 | 0.7390 | 0.7390 | 0.7390 | 0.7390 |
> Q4: Testing EoH with the released code
We will test EoH with its released code and update the results.
> Q5: Comparisons based on LLM calls.
ReEvo doubles LLM calls when performing crossover. However, we believe LLM inference can be easily sped up by scaling up hardware, and is rapidly technically evolving on both algorithmic and hardware fronts. In contrast, evaluating the generated heuristics can be costly in real-world applications. Therefore, we believe that comparisons based on heuristic evaluations are more practical and meaningful.
**References**:
[1] Shinn et al., Reflexion: language agents with verbal reinforcement learning, NeurIPS 2023
[2] Madaan et al., Self-Refine: Iterative Refinement with Self-Feedback, NeurIPS 2023
[3] Wang et al., TASTE: Teaching Large Language Models to Translate through Self-Reflection, ACL 2024
[4] Chen et al., DUAL-REFLECT: Enhancing Large Language Models for Reflective Translation through Dual Learning Feedback Mechanisms, ACL 2024
[5] Zhang et al., Understanding the Importance of Evolutionary Search in Automated Heuristic Design with Large Language Models, 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanations. I still have a few concerns that I would appreciate further clarification on.
> W1 and W2 follow up:
Firstly, while it is mentioned that LLM-based methods are less likely to cause memory blowups (such as compared to learning-based methods or complex heuristics like LKH3), this does not seem to highlight the specific contribution of integrating the self-reflection technique. Additionally, although I understand that reflection can enhance performance, it appears to me that the paradigm you proposed is more of a combination of the reflection technique and Evolutionary Computation (EC). This essentially allows the LLM to generate dynamic prompts to guide heuristic generation, as opposed to manually designed prompts used in the methods you compared.
Moreover, in your experiments, did you control parameters such as the population size or LLM calls to prove that your proposed paradigm is overall superior to the state-of-the-art methods?
> Q2 follow up:
Could you clarify whether the performance of your proposed method varies significantly when applied to different LLMs? If so, should this be considered a major challenge for researchers attempting to follow your work, given that your method seems to rely heavily on the choice of LLM?
> Q5 follow up:
I still have some concerns about the logic presented. You mentioned that LLM inference can be accelerated through hardware scaling and that there are rapid advancements in both algorithmic and hardware technologies. However, currently, would it be fair to assume that comparisons based on LLM call counts could still lead to inconsistencies in terms of cost and performance?
---
Reply to Comment 1.1.1:
Title: Further clarifications
Comment: We appreciate your response. Below we clarify your remaining concerns.
> LLM-based methods are less likely to cause memory blowups; mitigating memory blowups is not the contribution of EA-reflection integration.
It seems there may be a misunderstanding regarding the concept of memory blowups in this context. Here, memory blowups do not refer to GPU memory usage (though that is also a concern). Instead, **it refers to the LLM agent's memory**. An LLM agentic architecture involves modules for (1) profile, (2) memory, (3) planning, and (4) action [1]. During its interactions with the environment (in our context, designing heuristics and receiving their fitness values as feedback), the agent's memory module stores the agent's experiences [1]. Although we want to load as much experience as possible into the LLM's context during inference, the context length is limited; the LLM's capability to process in-context knowledge/examples is also limited. This is the memory blowup we are referring to.
In ReEvo, short-term reflections interpret the environmental feedback in each round of interaction. Long-term reflections distill the accumulated experiences and knowledge so that they can be loaded into context during inference without causing memory blowups. We hope this clarifies the contribution of EA-reflection integration in mitigating memory blowups.
[1] A Survey on Large Language Model-based Autonomous Agents, 2024
> Automated dynamic prompting for heuristic generation.
Leveraging reflections to dynamically guide heuristic generation with accumulated knowledge is the core contribution of ReEvo. We hope ReEvo can inspire future research in combining EC with LLMs to eliminate manual prompt design and enhance the EoH performance with LLMs-derived prompts.
> Control parameters for comparisons
Due to the combinatorial nature of the parameter space, we believe it is reasonable to use the original parameters of the compared methods.
> Does your method's performance vary significantly with different LLMs, posing a major challenge for researchers replicating it?
LLM reasoning techniques generally show performance variations across different LLMs [1]. It is not challenging for researchers to follow our work, as it is evident that stronger LLMs are better at leveraging reflections or general reasoning techniques [1]. We recommend using advanced LLMs (e.g., GPT-3.5-turbo and its successors) to maximize the utility of reflection and other LLM techniques that require reasoning capabilities [1]. In the future, we only expect LLMs to become even more powerful and more capable of leveraging reflections [2].
[1] Reasoning with Language Model Prompting: A Survey, 2024
[2] Reflexion: Language agents with verbal reinforcement learning, 2023
> Would it be fair to assume that comparisons based on LLM call counts could still lead to inconsistencies in terms of cost and performance?
When a large number of heuristic evaluations are permitted, the performance of different EAs within the EoH/LHH context becomes nearly indistinguishable [1]. Therefore, we believe the primary goal of designing better EAs is to **address scenarios where heuristic evaluations are costly**, which is common in many industrial applications. To this end, we propose ReEvo and adopt an experimental setting that limits heuristic evaluations.
LLM inference today, whether through local models or commercial APIs, **is already highly cost-effective**—both in terms of expense (e.g., about $0.0003 per call in ReEvo using GPT-3.5-turbo) and time (e.g., less than one second on average with asynchronous API calls or batched inference). These costs are negligible compared to real-world heuristic evaluations, which, even on simplified academic benchmarks, can take over 20 minutes per evaluation in EDA problems.
**LLM reasoning techniques** (e.g., Chain of Thought or CoT) offer another perspective for fair comparisons. Although variants like Tree of Thought (ToT), Graph of Thought (GoT), and MCTS-based methods require more LLM calls, performance is typically assessed by the final evaluation outcomes, such as k-shot solution assessments, similar to k heuristic evaluations in the EoH/LHH context.
If one has to compare EoH/LHHs based on LLM inference costs, it is essential to consider token usage, as tokens, not the number of LLM calls, determine both the cost and the time. Requiring additional heuristic descriptions, as in EoH, increases these costs, which ReEvo avoids.
In summary, comparing EoH/LHH methods based on the number of LLM calls is neither reasonable nor meaningful.
[1] Understanding the Importance of Evolutionary Search in Automated Heuristic Design with Large Language Models, 2024 | Summary: The paper's contributions consist of multiple sections, which could be orthogonal:
1. The paper proposes a new ReEvo algorithm within the class of string-mutation evolutionary search methods. The core idea is to add a "Reflection LLM" which observes patterns over the history of trials, and proposes a new generation instruction for the regular Generator LLM.
2. This algorithm is applied on the area of combinatorial optimization, to mutate the code specifying a search heuristic.
Experiments are conducted to:
1. Show better heuristic search over a variety of problems and settings, such as the Travelling Salesman Problem (TSP), Ant Colony Optimization, Electronic Design Automation, and Neural Combinatorial Optimization
2. Ablate the importance of using the "Reflection LLM", especially over longer horizons, and show outperformance over other string-mutation methods like EoH.
Strengths: * This paper is very well written and structured. It demonstrates a strong understanding of the current literature and the nuanced differences against previous works, and their strengths/weaknesses.
* The paper supports itself well with numerous experiments among multiple dimensions (e.g. varying combinatorial optimization applications, ablation studies on its own components, comparisons against other string-mutation baselines).
* Overall, the paper does not have any glaring weaknesses and should be a solid accept.
Weaknesses: * The current writing style is a bit too strong on the evolutionary side (e.g. targeted towards an audience like GECCO), which makes parts of the paper unmotivated to readers who are not deep into the evolutionary literature. It becomes too easy to view the paper as an incremental improvement in the class of evolutionary string-mutation methods, which in itself is a good contribution, but not groundbreaking given previous works (PromptBreeder, FunSearch, and other works that the paper itself has cited).
* There are possibly better ways to phrase the paper to make the method more natural. For example, I believe the Reflection LLM with its short-term + long-term variants, is effectively a technique to load in-context samples without incurring memory-blowups.
* It is unclear whether ReEvo can be applied to any other string-based optimization scenarios (e.g. Prompt optimization, generic code search). Currently the Reflection LLM is very targeted towards summarizing combinatorial heuristics. If so, this would be nice to touch upon, or if not, this should be listed in the limitations section.
* Following up on the above comment, this could imply that ReEvo is over-engineered at the moment; is there a more general and simpler variant of it, applicable to any string problems?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please address my weaknesses above.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes, in Section 7 (Conclusion).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in reviewing our work, the insightful questions and suggestions you raised, and your recognition of our contributions. We respond to your comments below.
> W1: The paper's evolutionary focus may be limiting its appeal and perceived novelty to a broader audience, and rephrasing it to emphasize the method's natural approach could be beneficial.
Thank you for providing a fresh perspective on the technical presentation. We will adopt your suggestion to generalize the description of the reflection technique. As you suggest, this approach loads in-context knowledge without causing memory blowups. We believe this could broaden its appeal to a wider audience.
> W2: The paper should clarify ReEvo's applicability to other string-based optimization scenarios and consider discussing a more generalized version of the method.
Thank you for this valuable suggestion. ReEvo is generally applicable to other string-based optimization scenarios as long as reflecting over the relative performance of strings is meaningful. We exemplify its generality by applying it to prompt learning. We evolve prompts for a formal logic reasoning task from MMLU, using random search, vanilla GA, and ReEvo, respectively. The results below present the classification accuracy against the number of prompt evaluations, where ReEvo demonstrates superior sample efficiency.
\# prompt evaluations | 4 | 8 | 12 | 16 | 20
--- | --- | --- | --- | --- | ---
Random Search | 10 $\pm$ 0 % | 10 $\pm$ 0 % | 10 $\pm$ 0 % | 10 $\pm$ 0 % | 10 $\pm$ 0 %
Vanilla GA | 10 $\pm$ 0 % | 22 $\pm$ 8 % | 25 $\pm$ 12 % | 37 $\pm$ 8 % | 38 $\pm$ 9 %
ReEvo | 10 $\pm$ 0 % | 37 $\pm$ 5 % | 43 $\pm$ 2 % | 45 $\pm$ 0 % | 45 $\pm$ 0 %
---
Rebuttal Comment 1.1:
Title: A different writing strategy may have been better.
Comment: Thanks for the clarification. As I mentioned, because the focus of the paper is on combinatorial heuristics, this is causing the other reviewers to require multiple comparisons to SOTA combinatorial optimizers, which I'm not sure is even worth the time to address.
I think the strategy of phrasing the paper's contributions in terms of the general algorithm (rather than specific combinatorial application) may have been better.
I retain my score, but acknowledge the above issue may lead to a possible rejection of the paper.
---
Reply to Comment 1.1.1:
Title: Thank you for reviewing
Comment: Thank you for reviewing. We will follow your suggestions to rephrase the paper's contributions.
Regarding the comparison with SOTA heuristic solvers (e.g. LKH), LHHs are general and versatile. They are especially suitable for cases where expert knowledge is lacking or the problem is treated as a black box.
Regarding the comparison with SOTA ML4CO optimizers, LHHs demonstrate unique strengths:
- LHHs generate interpretable heuristics (code snippets), while ML4CO usually generates black-box parameterized policies. Interpretable heuristics offer insights for human designers and can be more reliable in practice when faced with dynamic environments, limited data, distribution shifts, or adversarial attacks.
- LHHs generate heuristics that are more efficient in terms of computational resources, as they do not require GPU during deployment.
- LHHs require fewer than 100 heuristic evaluations and about 5 minutes to evolve a strong heuristic, while many ML4CO methods require millions of samples and days of training. When the solution evaluation is expensive, LHHs are more practical.
- LHHs only need some text-based (and even black-box) explanations to guide the search. ML4CO requires the development of NN architectures, hyperparameters, and training strategies, where informed inductive biases and manual tuning are crucial to guarantee performance. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection | Accept (oral) | Summary: This paper introduces a novel end-to-end algorithm named E2E-MFD for multimodal image fusion and object detection. Unlike existing joint learning methods, its key innovation lies in the synchronous joint optimization approach, simplifying the fusion detection process into a single training step and enhancing efficiency compared to traditional multi-step methods. To harmonize the losses between the image fusion and object detection networks, a Gradient Matrix Task-Alignment method is proposed. This method balances the gradients of shared parameters between the image fusion and object detection tasks, addressing the challenges of task dominance and conflicting gradients in multi-task learning. Additionally, an image fusion network with an Object-Region-Pixel Phylogenetic Tree is designed to perceive information at different granularity levels. Experimental results demonstrate the performance of the proposed method in both image fusion and object detection.
Strengths: - The idea of learning image fusion and object detection tasks simultaneously to mutually benefit each other is intriguing and reasonable.
- An end-to-end fusion detection algorithm is proposed, effectively avoiding the local optimum problem encountered in multi-stage training models. Specific modules such as Gradient Matrix Task-Alignment and Object-Region-Pixel Phylogenetic Tree are introduced to achieve this goal.
- Sufficient Experiments demonstrate that these modules facilitate the learning process, and jointly optimizing these two tasks outperforms other existing pipelines.
- The authors clearly describe their methods in the paper and enhance its comprehensibility through the judicious use of formulas and figures.
Weaknesses: The overall idea is pretty interesting and reasonable. I can see the insight in the proposed method. However, there are some typos in paper:
1. The $\mathcal{L}_{\text{SSIM}}$ in line 179 does not seem to be the same as in Equation (7).
2. I see a dashed arrow in Figure 1; what does this mean?
Technical Quality: 4
Clarity: 4
Questions for Authors: 1.In "Study of branches in the Object-Region-Pixel Phylogenetic Tree", the authors analyzed the reasons for performance degradation under settings 0, 1, 2, 3, and 4, and provided visual evidence. Could you elaborate on why the fusion performance only starts to decline after adding the fourth setting?
2.For Figure 4, the targets are only circled. Enlarging the highlighted areas, similar to the other figures, would make them clearer.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Expanding new modal datasets or implementing modality conversion between multimodal data will become a solution to the single issue raised in the paper about publicly available multimodal object detection datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reviewer cEzg
Thank you for your feedback.
**A1**: Regarding the typo in Equation (7) and the inconsistency with $\mathcal{L}_{\text {SSIM }}$ on line 179, we apologize for the oversight. We have revised line 179 to ensure consistency with the description in Equation (7).
**A2**: The dashed arrows in Figure 1 indicate cutting off the gradient flow. We have clarified this in the figure caption for better understanding.
**A3**: In the study of branches in the Object-Region-Pixel Phylogenetic Tree, we analyzed the performance degradation under settings 0, 1, 2, 3, and 4, and provided visual evidence. The fusion performance begins to decline after adding the fourth setting primarily due to the increased complexity and interaction among multiple branches. As more branches are added, the network may struggle to effectively balance and integrate pixel-level and object-level information, leading to a gradual decline in fusion performance.
**A4**: Regarding Figure 4, we acknowledge your suggestion to enlarge the highlighted areas where targets are circled. We have enhanced the clarity of these areas in Figure 4 to provide a clearer representation of the targets, similar to other figures in the manuscript.
---
Rebuttal Comment 1.1:
Comment: The rebuttal has addressed my concerns. Due to the novel motivation, clear methodology, and comprehensive experimental analysis, I will maintain my score. I suggest incorporating these modifications into the paper.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your prompt comments and for affirming our rebuttal. As you suggested, we will incorporate these explanations and revisions into the paper to enhance clarity. | Summary: This paper focuses on the task of multimodal image fusion detection, combining texture details and target semantic information. An end-to-end multimodal fusion detection algorithm named E2E-MFD is proposed, which employs synchronous joint optimization, differing from existing independent or cascaded joint methods. The authors introduce a Gradient Matrix Task-Alignment method to help resolve gradient conflict issues in the image fusion and object detection tasks. Experiments on horizontal and oriented object detection datasets demonstrate the effectiveness of this method.
Strengths: 1. This paper presents the first attempt to achieve simultaneous single-stage training of image fusion and object detection, and the results appear very promising.
2. Inspired by multitask learning, the Gradient Matrix Task-Alignment method is introduced to reasonably balance the loss functions, thereby converging the fusion detection weights to optimal points.
3. The multi-granularity strategy in the Object-Region-Pixel Phylogenetic Tree demonstrates its effectiveness in learning shared parameters, thereby enhancing object detection performance.
4. The writing and figures in the paper are clear and easy to understand. Proper use of formulas enhances the comprehensibility of their method.
5. The experiments and ablation studies comprehensively demonstrate the results.
Weaknesses: The paper presents a thorough and well-executed series of experiments that significantly contribute to the strength and credibility of the research. However, I think some problems need to be addressed:
1. In the experiments, YOLOv5s is compared. But why not compare with the latest YOLO?
2. The three backbone networks involved in Figure 1 of the paper are not specified in the text, which limits other researchers' ability to know the details.
3. In the line 22, "a MF network" should be "an MF network".
Technical Quality: 4
Clarity: 4
Questions for Authors: See weaknesses above.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors note that current model validation relies on visible light and infrared modalities due to limited relevant datasets within the community. They express a need for new dataset guidelines and contributions to the open-source community to address multi-modal dataset validation challenges in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reviewer 6qWk
Thank you for your constructive insights.
**A1**: We acknowledge your point regarding comparing YOLOv5s with the latest version of YOLO. We chose YOLOv5s as it represents a well-established baseline in the field, and our focus was on evaluating the detection accuracy of different fusion methods under the same detector. This is consistent with the papers "MetaFusion: Infrared and Visible Image Fusion via Meta-Feature Embedding from Object Detection" (CVPR 2023) and "Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection" (CVPR 2022). However, we will consider including a comparison with the latest YOLO version in future work to provide a more comprehensive evaluation.
**A2**: Thank you for pointing out this omission. We apologize for the oversight. In Figure 1, the three backbone networks represent the same backbone. The parameters included are defined as the shared parameters for MF and OD networks. We will clarify this in the revised manuscript to ensure that other researchers have the necessary details for replication and comparison purposes.
**A3**: Thank you for your corrections; we have made the necessary revisions.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. My concerns have been solved, and I will keep my positive rating.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your prompt comments and recognition of our paper. | Summary: This paper proposes a joint learning diagram for multimodal fusion and object detection with task alignment module.
The suggested network achieves SOTA performance with affordable computational cost.
Strengths: This paper presents a novel approach to learning image fusion and object detection in a synchronous and joint way.
The proposed network achieves SOTA performance in both tasks.
Weaknesses: Globally, I am ok with the significance of the work with the SOTA performance, despite the fact that it comes with additional computational cost.
I am more concerned by other issues:
1. After reviewing, this paper gives me the impression that the proposed method is more like a combination of existing works/modules/tricks to achieve the SOTA performance. In other words, I am concerned by the novelty.
2. Secondly, this paper is hard to read and follow. The motivation of the proposed work is not strong enough compared to SOTA works. The diagrams are a little bit confusing. The writing needs to be improved.
3. I am ok with all the proposed modules and blocks to be claimed as novel, such as the blocks in nodes 1 and 2. It seems that the main contribution that the author claimed is on task alignment. This seems to be a very generic learning strategy. The authors should further validate its effectiveness with other works such as MetaFusion or other applications.
Technical Quality: 4
Clarity: 4
Questions for Authors: n/a
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reviewer G1Ro
Thank you for your feedback.
**A1**: Reviewer 6qWk and reviewer cEzg both affirmed the novelty of our paper. As reviewer cEzg commented, "The idea of simultaneously learning image fusion (MF) and object detection (OD) tasks to mutually benefit each other is intriguing and reasonable." **It is important to note that the contribution of this paper lies in introducing the first end-to-end joint training paradigm in the fusion detection field.** In current research, joint learning algorithms are an emerging hotspot, leveraging fusion networks and object detection synergies to enhance image informativeness and improve detection performance. However, **existing** optimization methods are typically **non-end-to-end**, relying on multiple steps, as shown in Figure 1. These approaches are cumbersome and reduce training efficiency. Therefore, we propose the **first end-to-end synchronous joint optimization** algorithm, E2E-MFD, which facilitates interaction between intrinsic features from both domains through synchronous joint optimization, enabling a **streamlined, one-stage process**.
Moreover, our end-to-end solution does not merely concatenate two tasks (modules) but involves dedicated module design. To harmonize **fine-grained details** with **semantic information**, we introduce the novel concept of an Object-Region-Pixel Phylogenetic Tree (ORPPT) coupled with a coarse-to-fine diffusion processing (CFDP) mechanism. Additionally, we find that the MF and OD tasks have naturally distinct optimization objectives. MF primarily focuses on capturing pixel-level relationships between image pairs, while OD integrates object semantics within diverse scene contexts. Therefore, there exists an **inherent optimization barrier** between these two tasks. To address this, we introduce the concept of **gradient alignment** from the multi-task learning field, proposing GMTA to align gradients for **optimizing task dominance and resolving conflicting gradients** in the end-to-end joint training process. As you mentioned, we ultimately propose a novel approach to learning image fusion and object detection synchronously and jointly. We are the first to innovatively **integrate image fusion and object detection** into a single-stage, end-to-end framework, achieving **SOTA results** on multiple datasets.
**A2**: As pointed out by reviewer 6qWk, “This paper presents the first attempt to achieve simultaneous single-stage training of image fusion and object detection, and the results appear very promising. Inspired by multitask learning, the GMTA method is introduced to reasonably balance the loss functions, thereby converging the fusion detection weights to optimal points.” As discussed in A1, previous SOTA methods relied on non-end-to-end optimization approaches, dividing joint training into multiple steps, which led to complexity during training. These methods excessively emphasized leveraging OD information for MF enhancement, complicating parameter balancing and making them susceptible to local optima of individual tasks. **Therefore, achieving a unified feature set that simultaneously satisfies the characteristics of each task through end-to-end training remains a formidable challenge.** This paper introduces E2E-MFD, an end-to-end multimodal fusion detection algorithm. E2E-MFD aims to seamlessly integrate detailed image fusion and object detection from coarse to fine levels. We introduce the gradient alignment concept from the multi-task learning domain, aiming to eliminate conflicting gradients between object detection and multimodal fusion tasks through the design of the GMTA optimization mode. **By facilitating synchronous joint optimization and fostering interaction between intrinsic features from both tasks, E2E-MFD achieves a streamlined single-stage process in an end-to-end manner.** In addition, reviewers bix9, 6qWk, and cEzg all acknowledged our writing presentation. We will strive to improve our writing skills to the best of our ability. Please help us identify which diagrams have caused you confusion. We will make every effort to revise the diagrams and provide comprehensive explanations.
**A3**: Our goal is to explore an end-to-end joint training approach. Thank you for recognizing the **novelty** of the **design of node1 and node2**. In this framework, Node 1 serves the object detection task, while Node 2 acts as a module for MF tasks, with personalized settings harnessing the respective roles of the nodes. As you mentioned, one of our primary contributions is introducing **an end-to-end synchronous training paradigm** for multimodal fusion detection, where synchronized joint optimization allows both tasks to complement each other synergistically. This collaboration enables MF to generate richer, more informative images, enhancing the performance of OD, which in turn provides valuable semantic insights to MF for accurate localization and identification of objects within scenes. However, it is crucial to address the issue of **gradient conflicts** in the joint optimization of fusion and detection tasks, known as task consistency. This challenge, though a common concern in multitask learning, is effectively tackled for the **first time** in the realm of **multimodal fusion detection** through end-to-end training. As reviewer cEzg noted, “An end-to-end fusion detection algorithm is proposed, effectively avoiding the local optimum problem encountered in multi-stage training models.”
Additionally, it should be noted that the training approach of MetaFusion is not end-to-end. Our algorithm essentially represents an advanced, end-to-end counterpart of MetaFusion (a non-end-to-end image fusion method), refined to its fullest extent.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. I don't have other questions.
---
Reply to Comment 1.1.1:
Title: To Reviewer G1Ro
Comment: Thank you for taking the time to reply. We are pleased to hear that we have addressed your concerns. If you have any further questions, please let us know promptly so that we can resolve them in the remaining time. We hope you will reconsider our score. Thank you again. | Summary: This paper proposes an end-to-end algorithm for multimodal fusion detection; experiments on fusion and detection tasks show better performance than several existing methods.
Strengths: This paper proposes an end-to-end algorithm with a one-stage training process for multimodal fusion detection; experiments on fusion and detection tasks show better performance than several existing methods.
Weaknesses: 1. Why use the V channel in the HSV space of the fusion results to calculate the metrics?
2. The best result of car detection highlighted in table 2 is wrong. Additionally, add analysis of why the proposed algorithm couldn’t realize the best detection effect of car.
3. Provide a detailed justification for the chosen datasets. Explaining why these specific datasets are representative or challenging.
4. The details of GMTA should be added.
5. It is mentioned that the GMTA operation is executed every 1000 iterations, but more specific implementation details, such as the specific setting and selection basis of parameters, are lacking.
6. Ablation studies for CFDP should be added to verify how CFDP impacts the final results.
7. The advantages of ORPPT and GMTA compared with existing techniques are not fully demonstrated. It needs to be more explicit about how these innovations solve existing problems or lead to performance improvements.
8. More SOTA methods and metrics should be added for image fusion task.
Technical Quality: 2
Clarity: 3
Questions for Authors: See the weaknesses.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reviewer bix9
Thanks for your comments.
**A1:** This is a common default setting in the field such as "CVPR23 MetaFusion". V (brightness) channel can effectively measure the algorithm’s ability to handle low-light environments.
**A2:** We have made the revision in Tab. 2. It is common to observe fluctuations in detection accuracy within a single category on the M3FD dataset, as shown in Tab. R.1. Despite these fluctuations, our algorithm significantly outperforms others in $\text{mAP}$.
**A3:** We utilize widely recognized datasets with diverse tasks, sufficient quantity, and complex environments. TNO and Roadscene are evaluation datasets for MF. TNO captures multispectral images day and night, while Roadscene comprises aligned image pairs from road scenes with vehicles and pedestrians. We also enrich experimental validation by integrating horizontal and oriented OD datasets. The M3FD dataset covers complex scenarios with diverse camera angles. Additionally, the DroneVehicle dataset is considered for oriented OD with diverse scenes from an aerial viewpoint.
**A4:** The process of GMTA is illustrated in Section 3.4 of the paper, and we have expanded this section to eliminate the confusion. GMTA mitigates the undesired effects of the optimization barrier in the task-shared parameters, which are supposed to be balanced between the MF and OD tasks. Conflicts in the gradient matrix $\boldsymbol{G}$ correspond to a negative cosine distance between the gradient vectors ($\langle\boldsymbol{g}_u,\boldsymbol{g}_d\rangle<0$), while dominance is caused by an imbalance of their magnitudes ($\Vert \boldsymbol{g}_u \Vert \gg \Vert \boldsymbol{g}_d \Vert$ or $\Vert \boldsymbol{g}_u \Vert \ll \Vert \boldsymbol{g}_d \Vert$).
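These two failure modes can be checked numerically. A minimal sketch (the function name and the dominance threshold are illustrative, not from the paper):

```python
import numpy as np

def diagnose_gradients(g_u, g_d, dominance_ratio=10.0):
    """Detect conflict (negative cosine) and dominance (magnitude
    imbalance) between two task gradients g_u and g_d."""
    nu, nd = np.linalg.norm(g_u), np.linalg.norm(g_d)
    cosine = float(g_u @ g_d) / (nu * nd)
    conflict = bool(cosine < 0.0)                 # <g_u, g_d> < 0
    ratio = max(nu / nd, nd / nu)
    dominance = bool(ratio > dominance_ratio)     # ||g_u|| >> ||g_d|| or vice versa
    return conflict, dominance

# opposing directions -> conflict; ~100x magnitude gap -> dominance
g_u = np.array([1.0, 0.0])
g_d = np.array([-100.0, 1.0])
print(diagnose_gradients(g_u, g_d))  # (True, True)
```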
Following Aligned-MTL from the multi-task learning literature, the condition number is optimal ($\kappa(\boldsymbol{G})=1$) if and only if the gradients are orthogonal and equal in magnitude, which means that the system of gradients has no dominance or conflicts:
$$\kappa(\boldsymbol{G})=1 \iff \langle\boldsymbol{g}_u,\boldsymbol{g}_d\rangle=0 \ \text{and} \ \Vert \boldsymbol{g}_u \Vert = \Vert \boldsymbol{g}_d \Vert.$$
The final gradient linear system $\hat{\boldsymbol{G}}$ satisfies the optimal condition by the condition number. Thus, the feature learning constraint $\mathcal{S}(\boldsymbol{ \theta}^{\star})$ can be defined as the following optimization to eliminate training instability:
$$ \min _{\hat{\boldsymbol{G}}}\|\boldsymbol{G}-\hat{\boldsymbol{G}}\|_F^2 \quad \text { s.t. }\kappa(\hat{\boldsymbol{G}})=1 \iff \min _{\hat{\boldsymbol{G}}}\|\boldsymbol{G}-\hat{\boldsymbol{G}}\|_F^2 \quad \text { s.t. }\hat{\boldsymbol{G}}^{\top} \hat{\boldsymbol{G}}=\boldsymbol{I}.$$
The problem can be treated as a Procrustes problem and can be solved by performing a singular value decomposition (SVD) to $\boldsymbol{G}$ ($\boldsymbol{G}=\boldsymbol{U} \boldsymbol{\Sigma} \boldsymbol{V}^\top$) and rescaling singular values corresponding to principal components so that they are equal to the smallest singular value:
$$
\hat{\boldsymbol{G}}=\sigma \boldsymbol{U} \boldsymbol{V}^{\top}=\sigma \boldsymbol{G} \boldsymbol{V} \boldsymbol{\Sigma}^{-1} \boldsymbol{V}^\top,
$$
where,
$$(\boldsymbol{V}, \boldsymbol{\lambda})= eigh(\boldsymbol{G}^{\top}\boldsymbol{G}),$$
$$\boldsymbol{\Sigma}^{-1} = diag(\sqrt{1/\lambda_{max}},\sqrt{1/\lambda_{min}}).$$
$eigh$ finds eigenvectors $\boldsymbol{V}$ and eigenvalues $\boldsymbol{\lambda}$, with $diag$ representing a diagonal matrix. $\lambda_{max}$ and $\lambda_{min}$ are the maximum and minimum eigenvalues of $\boldsymbol{\lambda}$. The stability criterion, defined by the condition number, determines the gradient system only up to an arbitrary scale. To resolve this ambiguity, we select the largest scale ensuring convergence to the optimum, corresponding to the minimal singular value of the initial gradient matrix:
$$\sigma=\sigma_{\min }(\boldsymbol{G})=\sqrt{\lambda_{min}}.$$
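Putting the steps above together, a minimal NumPy sketch of the alignment (our reading of the description above; the function and variable names are ours, and flattening/restoring real network gradients is omitted):

```python
import numpy as np

def gmta_align(g_u, g_d):
    """Procrustes-style alignment of two task gradients.

    Stacks g_u and g_d as the columns of G, eigendecomposes G^T G, and
    rescales all singular values to the smallest one, so the aligned
    gradients are orthogonal and equal in magnitude."""
    G = np.stack([g_u, g_d], axis=1)           # shape (d, 2)
    lam, V = np.linalg.eigh(G.T @ G)           # eigenvalues in ascending order
    lam = np.clip(lam, 1e-12, None)            # guard against near-parallel gradients
    sigma = np.sqrt(lam[0])                    # smallest singular value of G
    Sigma_inv = np.diag(1.0 / np.sqrt(lam))    # ordering matches columns of V
    G_hat = sigma * G @ V @ Sigma_inv @ V.T    # = sigma * U V^T
    return G_hat[:, 0], G_hat[:, 1]

rng = np.random.default_rng(0)
g_u = rng.normal(size=16)
g_d = 20.0 * rng.normal(size=16)               # strongly dominant task
a, b = gmta_align(g_u, g_d)
# after alignment: orthogonal gradients of equal norm (both equal to sigma)
assert abs(float(a @ b)) < 1e-6
assert abs(float(np.linalg.norm(a) - np.linalg.norm(b))) < 1e-6
```

After alignment, $\hat{\boldsymbol{G}}^\top\hat{\boldsymbol{G}}=\sigma^2\boldsymbol{I}$, so neither task's gradient dominates or conflicts with the other; the two aligned columns can then be summed for the shared-parameter update.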
**A5:** The GMTA process operates during the computation and updating stages of the two gradients, approximately every $n$ iterations (gradient updates), to balance the independence and coherence of the various tasks. Tab. R.2 presents the ablation analysis of the $n$ parameter. A detailed analysis is in the **Experiment Results part of the Global Author Rebuttal**.
**A6:** We have added ablation experiments on CFDP, covering whether CFDP is used and the number of proposed boxes, as shown in Tab. R.3. A detailed analysis is in the **Experiment Results part of the Global Author Rebuttal**.
**A7:** The paper designs the first end-to-end joint training paradigm in MFD. E2E-MFD enhances interaction between intrinsic features through synchronous joint optimization, streamlining the process. Our solution goes beyond task concatenation, incorporating dedicated module design. Inspired by a phylogenetic tree analogy, we employ an ORPPT to extract features across multiple region scales. By utilizing PFMM and $L$ RFRM branches, we capture various granularities from coarse to fine. Tab. 5 validates our approach, while Fig. 6 underscores the importance of effectively integrating pixel-level and object-level details.
MF primarily focuses on pixel-level relationships between image pairs, while OD integrates object semantics within diverse scene contexts. This inherent optimization barrier between the tasks necessitates a solution. We propose GMTA, a gradient alignment concept within the multi-task learning framework, to optimize task dominance and resolve conflicting gradients in end-to-end joint training. Results in Tab. 4 demonstrate that GMTA, with shared weights, yields superior performance. Comparison with other methods in Tab. 6 further validates our approach. Fig. 5 illustrates how GMTA balances shared parameters between MF and OD, effectively mitigating gradient dominance and conflict.
**A8:** In Tab. R.4, three recent fusion SOTA methods (CVPR 2024 SHIP, PR 2024 CFNet, and PR 2024 DSFusion) and three evaluation metrics (Qabf, PSNR, and SSIM) are incorporated to validate the effectiveness of our method. A detailed analysis is provided in the **Author Rebuttal Experiment Results**.
---
Rebuttal Comment 1.1:
Title: Follow-up on discussion
Comment: Dear Reviewer bix9,
We hope that our rebuttals have clarified your concerns. If there are any specific analyses or complementary experiments that could resolve your remaining doubts, we would be happy to provide them. We sincerely thank you again for your time and feedback.
Rebuttal: # Global Author Rebuttal
We thank the reviewers for their comments. We are encouraged that the reviewers appreciate the **sound technology** (6qWk, cEzg), **well-organized writing** (bix9, 6qWk, cEzg), **certain influence** (bix9, 6qWk, cEzg), **clear motivation** (bix9, 6qWk, cEzg), and **excellent experimental performance** (bix9, G1Ro, 6qWk, cEzg). All suggestions were seriously considered and we will carefully revise the manuscript. We address each reviewer in individual comments.
## Motivation and Novelty.
The paper introduces the first end-to-end joint training paradigm in the MFD field, as acknowledged by reviewer comments: reviewer cEzg finds the idea of simultaneous learning intriguing and reasonable, reviewer 6qWk notes it’s the first attempt at simultaneous single-stage training, and reviewer G1Ro acknowledges the novel approach presented in the paper.
Current research aims to enhance image informativeness and detection performance through joint learning algorithms integrating MF and OD networks. However, existing non-end-to-end optimization methods involve cumbersome multiple steps that reduce training efficiency. Therefore, we propose E2E-MFD as the first end-to-end joint training paradigm in the MFD field. E2E-MFD facilitates interaction between intrinsic features from both domains through a streamlined, one-stage process. Our approach includes dedicated module design, such as the novel ORPPT and CFDP mechanism, to effectively balance and integrate fine-grained details with semantic information at the pixel and object levels. Additionally, we identified the gradient conflict problem in synchronous training for the first time and designed GMTA to optimize task dominance and resolve conflicting gradients. Comprehensive experiments show that E2E-MFD is superior to existing pipelines.
## Experimental Results.
According to the suggestions of **reviewer bix9**, we have added the ablation experiments and detailed information is available in the **pdf document**:
A study of fluctuations in single-category detection accuracy on the M3FD dataset is shown in **Table R.1**. We show that fluctuations in detection accuracy within a single category are common on the M3FD dataset. Despite these fluctuations, our algorithm significantly outperforms others in overall mAP.
Table R.1: A study of fluctuations in single category detection accuracy.
| Method | People | Car | Bus | Motorcycle | Lamp | Truck | $ \text{mAP} _ {50} $ | $ \text{mAP} _ {50:95} $ |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| U2Fusion | 47.5 | 69.3 | 73.0 | 43.7 | 44.9 | 62.2 | 86.9 | 56.8 |
| U2Fusion | 47.7 | **70.1** | 73.2 | 43.2 | 44.6 | 63.9 | 87.1 | 57.1 |
| E2E-MFD | **60.1** | 69.5 | **81.4** | **52.2** | **47.6** | **72.2** | **91.8** | **63.8** |
GMTA is performed approximately every $n$ iterations (gradient updates), focusing on balancing the independence and coherence of the tasks. **Table R.2** presents an ablation analysis of the $n$ parameter of GMTA. Decreasing $n$ disrupts task optimization due to overly frequent alignment, while a larger $n$ becomes important once the network has settled on its task optimization directions. However, an excessively large $n$ leads to significant deviations between task paths, making alignment more challenging and negatively impacting performance.
Table R.2: Ablation studies of the iteration parameter $n$.
| $n$ | EN | MI | VIF | $\text{mAP} _ {50}$ | $\text{mAP} _ {50:95}$ |
|:-:|:-:|:-:|:-:|:-:|:-:|
| 500 | 6.17 | 15.05 | 1.58 | 90.93 | 62.93 |
| 1000 | **6.36** | **15.47** | **1.65** | **91.80** | **63.83** |
| 1500 | 6.24 | 15.08 | 1.62 | 91.10 | 63.16 |
| 2000 | 6.13 | 14.69 | 1.45 | 90.35 | 62.75 |
We conducted ablation experiments on CFDP in **Table R.3**, investigating both its inclusion and the number of proposal boxes. In the setting without CFDP, we kept the backbone network and substituted CFDP with an RPN (Region Proposal Network), a standard component of two-stage object detectors. The results indicate that CFDP enhances detailed information capture and precise box guidance, thereby improving both fusion image quality and detection performance. For the best balance between performance and efficiency, we selected 500 proposal boxes.
Table R.3: Ablation studies of the CFDP.
| Settings | Proposal boxes | EN | MI | VIF | $\text{mAP} _ {50}$ | $\text{mAP} _ {50:95}$ | Tr.Time |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| w/o CFDP | 500 | 6.07 | 14.78 | 1.58 | 90.13 | 61.98 | 2h52m11s |
| w CFDP | 300 | 6.23 | 14.97 | 1.60 | 90.89 | 63.29 | 2h23m45s |
| w CFDP | 500 | 6.36 | **15.47** | **1.65** | 91.80 | **63.83** | 2h50m32s |
| w CFDP | 1000 | **6.37** | 15.34 | 1.63 | **92.05** | 63.75 | 3h32m30s |
In **Table R.4**, three recent fusion SOTA methods (CVPR2024 SHIP, PR2024 CFNet, and PR2024 DSFusion) and three evaluation metrics (Qabf, PSNR, and SSIM) on three datasets (M3FD, TNO, and RoadScene) are incorporated to validate the effectiveness of our method. Compared with other SOTA methods, E2E-MFD achieves superior performance across multiple metrics.
Table R.4: Quantitative results of different fusion methods.
| Method| EN ↑ | MI ↑ | VIF ↑ | Qabf ↑ | PSNR ↑ | SSIM ↑ |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| CFNet | 5.64 | 13.97 | 1.54 | 0.44 | 27.91 | 1.24 |
| DSFusion | 5.93 | 13.95 | 1.57 | 0.45 | 28.12 | 1.34 |
| SHIP | 6.19 | 15.02 | 1.61 | 0.50 | 29.25 | 1.38 |
| E2E-MFD | **6.36** | **15.47** | **1.65** | **0.51** | **30.01** | **1.42** |
Pdf: /pdf/4c7a6882ab0535047acaf6321593e21369c2fce6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Mitigating the Impact of Labeling Errors on Training via Rockafellian Relaxation | Reject | Summary: This paper proposes a methodology called Rockafellian Relaxation (RR) to mitigate the impact of labeling errors in neural network training. The method is architecture-independent and integrates concepts from adversarial training to address dataset imperfections robustly. Through theoretical justifications and a series of experiments on standard datasets like MNIST and Toxic Comments, the paper demonstrates that RR can significantly improve the performance of neural networks trained under various corruption levels. The paper’s contributions are particularly valuable as they provide a new tool for improving training accuracy in the presence of label noise, enhancing the robustness and applicability of machine learning models in diverse and error-prone real-world settings.
Strengths: Originality: The paper's approach to using Rockafellian Relaxation for addressing labeling errors is innovative, especially the combination with adversarial training concepts.
Quality: The method is grounded in solid theoretical justification, and the empirical results show marked improvements over existing methods.
Clarity: The explanations of the methodologies and the algorithms are clear and detailed, making it easier to understand the operational aspects of the proposed solution.
Significance: The significance of this work lies in its potential to improve training robustness across various domains and dataset imperfections, which is highly relevant for deploying machine learning models in error-prone real-world environments.
Weaknesses: Computational Complexity: The added complexity might limit the practical application of the method in scenarios with constrained computational resources.
Technical Quality: 3
Clarity: 3
Questions for Authors: Scalability: How does the RRM scale in terms of computational cost and effectiveness with larger and more complex datasets?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Generalization to Different Noise Types: While the method is tested against uniform label noise, its effectiveness against other types of noise is not thoroughly investigated.
Dependence on Hyperparameter Tuning: The effectiveness of RRM is likely sensitive to the choice of hyperparameters, such as the regularization term and the parameters controlling the adversarial component. The paper does not provide extensive guidance on hyperparameter selection, which could affect the reproducibility and ease of application in different scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Question**
Scalability: How does the RRM scale in terms of computational cost and effectiveness with larger and more complex datasets?
## **Rebuttal**
Each iteration of RRM, as outlined in Algorithm 1 on page 5, comprises two tasks: (1) a gradient step and (2) a polynomial-size (in the training data size) linear program that performs the loss reweighting. As task (1)'s gradient step is entirely standard practice, and task (2) runs in time polynomial in the training data size, scaling to larger datasets is not a limiting factor. As a side note, the linear program in (2) has structure that can be exploited for fast computation. We are happy to add a discussion of these complexity matters, if given the opportunity.
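As a hedged sketch of the loss-reweighting idea (the exact linear program of Algorithm 1 is not reproduced here; the closed-form "drop the highest losses" rule below is only an illustrative special case), the reweighting step starts from uniform weights $1/N$ and lets the perturbations $u_i$ zero out the highest-loss samples, which are the most likely to be mislabeled. The function name and drop fraction are assumptions for illustration.

```python
# Hedged sketch, not the paper's exact linear program: illustrate how
# loss reweighting can zero out high-loss (likely mislabeled) samples,
# yielding a sparse weight vector that still sums to one.

def rrm_style_reweight(losses, drop_frac):
    """Return weights (1/N + u_i): zero for the top drop_frac highest
    losses, renormalized uniform weight for the remaining samples."""
    n = len(losses)
    k = int(drop_frac * n)  # number of samples to remove from consideration
    order = sorted(range(n), key=lambda i: losses[i], reverse=True)
    dropped = set(order[:k])
    kept = n - k
    return [0.0 if i in dropped else 1.0 / kept for i in range(n)]

losses = [0.1, 0.2, 5.0, 0.15, 7.5]  # two suspiciously large losses
w = rrm_style_reweight(losses, drop_frac=0.4)
# The two high-loss points receive zero weight; the rest share mass equally.
```

With these toy losses, `w` is `[1/3, 1/3, 0, 1/3, 0]`: the samples with losses 5.0 and 7.5 are removed, matching the "zero weight to data points with high losses" behavior described for RRM.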
## **Question**
Generalization to Different Noise Types: While the method is tested against uniform label noise, its effectiveness against other types of noise is not thoroughly investigated.
## **Rebuttal**
Although we explicitly state the use of uniform label noise in Section 3.1, which is indeed a very common scheme in the literature, we clarify that our analysis in fact did not rely on this assumption. Towards providing you insight into the non-uniform case, we have repeated the experiments on MNIST that produced Table 1, but now with non-uniform label noise. More precisely, after uniformly randomly selecting $C$ percent of the training pairs, we proceed to contaminate the label $y_i$ in each pair $(x_i, y_i)$ in the following non-uniform manner, as outlined below in the transition kernel matrix of (True Label, Contaminated Label) entries. For example, as the matrix below indicates, if the true label $y_i = 5$, then instead of uniformly randomly drawing an alternative digit $\tilde{y}_i$ from among $\{0, 1, \ldots, 9\} \setminus \{5\}$, we have
$\tilde{y}_i =$ \
0 w.p. 0.051\
1 w.p. 0.017 \
2 w.p. 0. \
3 w.p. 0.627 \
$\ldots$
| True \ Contaminated | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 0 | 0 | 0.077 | 0.077 | 0.154 | 0 | 0.077 | 0.385 | 0 | 0.154 | 0.077 |
| 1 | 0 | 0 | 0.333 | 0.111 | 0 | 0.111 | 0.111 | 0 | 0.333 | 0 |
| 2 | 0.097 | 0.065 | 0 | 0.258 | 0.032 | 0 | 0.097 | 0.194 | 0.258 | 0 |
| 3 | 0 | 0 | 0.125 | 0 | 0 | 0.125 | 0 | 0.125 | 0.625 | 0 |
| 4 | 0.111 | 0.037 | 0.074 | 0.074 | 0 | 0.074 | 0.222 | 0.037 | 0.111 | 0.259 |
| 5 | 0.051 | 0.017 | 0 | 0.627 | 0.017 | 0 | 0.153 | 0 | 0.102 | 0.034 |
| 6 | 0.235 | 0.176 | 0.059 | 0.059 | 0.059 | 0.176 | 0 | 0 | 0.235 | 0 |
| 7 | 0.050 | 0.225 | 0.200 | 0.125 | 0 | 0 | 0 | 0 | 0.200 | 0.200 |
| 8 | 0.107 | 0.036 | 0.107 | 0.357 | 0.107 | 0.071 | 0.071 | 0.107 | 0 | 0.036 |
| 9 | 0.064 | 0.170 | 0 | 0.213 | 0.170 | 0.170 | 0.021 | 0.085 | 0.106 | 0 |
These entries were generated by the confusion matrix of an imperfect MNIST classifier.
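The contamination scheme above can be sketched in a few lines: a selected label is resampled from the row of the transition kernel corresponding to its true class. Only the row for true label 5 is hard-coded below (taken from the matrix above); the function name and sampling details are illustrative, not from the paper.

```python
import random

# Sketch of the non-uniform contamination described above: resample a
# selected label from the transition-kernel row of its true class.
# Only the row for true label 5 is shown (values from the matrix above).
KERNEL_ROW_5 = [0.051, 0.017, 0.0, 0.627, 0.017, 0.0, 0.153, 0.0, 0.102, 0.034]

def contaminate(kernel_row, rng):
    """Draw a contaminated label according to the transition-kernel row."""
    r, acc = rng.random(), 0.0
    for label, p in enumerate(kernel_row):
        acc += p
        if r < acc:
            return label
    return len(kernel_row) - 1  # guard against probability rounding

rng = random.Random(0)
draws = [contaminate(KERNEL_ROW_5, rng) for _ in range(10000)]
frac_3 = draws.count(3) / len(draws)
# The diagonal mass is zero, so a contaminated label never equals the
# true label 5, and label 3 appears roughly 62.7% of the time.
```

Each zero-probability entry (including the diagonal) is never drawn, so the sampled contamination matches the kernel row for true label 5, e.g. drawing 3 with probability about 0.627.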
The results from this new experiment confirm the performance benefits that were observed (compare to Table 1) under conditions of uniform label contamination.
| $\epsilon_{test}$\C | 0% AT | 0% A-RRM | 5% AT | 5% A-RRM | 10% AT | 10% A-RRM | 20% AT | 20% A-RRM | 30% AT | 30% A-RRM |
|-|-|-|-|-|-|-|-|-|-|-|
| 0 | 96.5 | **97.3** | 93.6 | **95.6** | 60.4 | **87.8** | 32.2 | **92.4** | 58.3 | **89.2** |
| 0.1 | 93.4 | **95.2** | 89.3 | **92.4** | 63.2 | **84.7** | 42.5 | **89.5** | 56.1 | **81.7** |
| 0.25 | 92.4 | **93.1** | 87.9 | **90.6** | **86.9** | 86.3 | 80.6 | **89.3** | 69.0 | **79.8** |
| 0.5 | **92.0** | 90.9 | **90.3** | 89.8 | **94.4** | 91.2 | **92.9** | 89.2 | **85.2** | 81.6 |
| 1 | **89.4** | 85.5 | **90.3** | 86.8 | **94.9** | 92.6 | **93.9** | 86.6 | **81.8** | 77.8 |
## **Question**
Dependence on Hyperparameter Tuning: The effectiveness of RRM is likely sensitive to the choice of hyperparameters, such as the regularization term and the parameters controlling the adversarial component. The paper does not provide extensive guidance on hyperparameter selection, which could affect the reproducibility and ease of application in different scenarios.
## **Rebuttal**
Qualitatively, across the four diverse scenarios we experimented with, we found the hyperparameter search non-burdensome, and it yielded beneficial RRM performance. In particular, we follow the standard practice of leveraging a validation set for hyperparameter selection. In the discussion following Theorem 3.1, we provide insight into how $\gamma$ may be tuned in relation to an estimate $\alpha$ of the labeling error in the dataset. If invited, we can quantify more precisely the sensitivity of performance to hyperparameter changes.
Strengths: This work addresses two different types of robustness: robustness to label noise and robustness to adversarial feature perturbation. It should be of interest to those who are generally interested in robust and trustworthy machine learning. Furthermore, the proposed training method has strong theoretical foundations, and its relation to other optimization formulations is discussed in detail. The theoretical results are validated in experiments that cover different datasets and types of data corruption.
Weaknesses: The experimental section lacks a relevant baseline for comparison. As it stands, it is unclear how this compares to other noise-reduction techniques. The relationship to other techniques is discussed in the related work section, it would be nice if the purported benefits of this approach were borne out empirically.
The introduction of adversarial training in section 3.5 is under-motivated. Based on the earlier sections, it is unclear how label and adversarial feature corruptions are related to each other, why we would want to achieve robustness to both, and whether previous approaches have attempted this before. I would suggest explicitly motivating this earlier in the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Could you expand on what you mean by RRM producing sparse weight vectors? Does assigning zero weight to data points with high losses result in sparsity in the parameter space?
- What is the reason for choosing the FGSM attack in A-RRM rather than a different attack, like PGD? Could other attacks be used in the place of FGSM?
- In Table 1, is the epsilon in the fourth row supposed to be 0.50 instead of 50?
- Do you have a hypothesis for why the test accuracy increases for models trained with AT and tested with $\epsilon_{test} = 0.50$ or $1.0$ when the level of corruption is increased (i.e. the AT columns in the bottom two rows of Table 1)? That seems like an unexpected result that may warrant further study?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are briefly discussed in the paper. As noted above, one main limitation is that it only studies $\ell_\infty$ bounded FGSM attacks. Furthermore, this paper only considers the uniform label noise model, and does not consider the case when label corruption might be correlated with features.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Question**
Could you expand on what you mean by RRM producing sparse weight vectors? Does assigning zero weight to data points with high losses result in sparsity in the parameter space?
## **Response**
In equation (3), the expressions $(\frac{1}{N} + u_i)$ are to be understood as the weight given to the $i$-th training sample. Hence, if we consider the weight vector $(\frac{1}{N} + u_i)_{i=1}^N$, it is "sparse" if many $u_i$ are set to $-\frac{1}{N}$; equivalently, when the weight vector carries many zero-valued entries. Thus, by our comment that "RRM produces sparse weight vectors by assigning zero weight to data points with high losses", we mean that samples with sufficiently high losses are removed from consideration.
We may have caused confusion surrounding "sparsity in the parameter space" with our statement: "...while lasso produces sparse solutions in the model parameter space, RRM produces sparse weight vectors by assigning zero weight to data points with high losses." In truth, sparsity in the (model) parameter(s) $\theta$ was not something we observed. Our intent was to contrast the sparsity in the weight vectors $(\frac{1}{N} + u_i)_{i=1}^N$ of RRM with the sparsity that occurs in the model parameters of lasso-regularized, linear regression (i.e. the linear coefficients).
## **Question**
What is the reason for choosing the FGSM attack in A-RRM rather than a different attack, like PGD? Could other attacks be used in the place of FGSM?
## **Response**
Were it not for the 9-page limit, we would have catalogued more attacks. Certainly, attacks like PGD could be used in place of FGSM. A good direction to take for follow-on work!
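To make the FGSM step concrete, here is a minimal sketch on a toy differentiable loss with an analytic gradient (the paper applies FGSM to a neural network's loss instead; the loss, points, and $\epsilon$ below are assumptions for illustration).

```python
# Minimal FGSM sketch: x_adv = x + eps * sign(grad_x loss).
# Toy loss: loss(x) = sum((x_i - t_i)^2), gradient 2 * (x_i - t_i).

def fgsm_perturb(x, target, eps):
    """One-step l_inf attack: move each coordinate by eps in the
    direction of the loss gradient's sign."""
    grad = [2.0 * (xi - ti) for xi, ti in zip(x, target)]
    sign = [1.0 if g > 0 else (-1.0 if g < 0 else 0.0) for g in grad]
    return [xi + eps * s for xi, s in zip(x, sign)]

loss = lambda v, t: sum((vi - ti) ** 2 for vi, ti in zip(v, t))

x = [0.2, 0.8]
target = [0.0, 1.0]  # loss is minimized at the target
x_adv = fgsm_perturb(x, target, eps=0.1)
# The single gradient-sign step strictly increases the toy loss.
```

A multi-step attack such as PGD would simply iterate this step, projecting back into the $\epsilon$-ball around the original input after each update, which is why it is a natural drop-in replacement for FGSM here.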
## **Question**
In Table 1, is the epsilon in the fourth row supposed to be 0.50 instead of 50?
## **Response**
Yes, thanks for the catch!
## **Question**
Do you have a hypothesis for why the test accuracy increases for models trained with AT and tested with $\epsilon_{test} = 0.50$ or $1.0$ when the level of corruption is increased (i.e. the AT columns in the bottom two rows of Table 1)? That seems like an unexpected result that may warrant further study?
## **Response**
Upon scanning the $\epsilon_{test} = 1.00$ row of Table 1, we see that the AT test accuracy numbers read from left-to-right (corresponding to increasing corruption level) as 86, 95, 94, 88, 98. We're not certain that this necessarily indicates that the test accuracy is increasing with corruption level. We have a similar perspective in the case of $\epsilon_{test} = 0.50$.
However, for some possible explanations of AT's perhaps unexpected performance$\ldots$
- Stochastic nature of model training (e.g. shuffling the data, initializing of parameters, number of iterations, etc.)
- The training perturbation level is $\epsilon_{train} = 1$, so the closer $\epsilon_{test}$ gets to 1, the more similar the training and testing environments become, perhaps explaining the increase in AT accuracy with increasing $\epsilon_{test}$. Conversely, as $\epsilon_{test}$ decreases to 0, AT's accuracy relative to RRM plummets. Given that it may be difficult to anticipate the test perturbation $\epsilon_{test}$, RRM can provide a perturbation-robust alternative to AT.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my questions. I appreciate the empirical comparison to ELR in the response to reviewer UZpK. However, I still believe that a more thorough comparison is needed, as these results are fairly inconclusive and you state that a more thorough hyperparameter search is necessary. I will therefore be keeping my current score.
---
Reply to Comment 1.1.1:
Title: On the Matter of ELR versus RRM-wrapped ELR
Comment: Towards addressing your concern over the inconclusiveness of our comparison, due to a lack of hyperparameter search, we have endeavored to revisit the experiments, now with a hyperparameter search, to show how RRM-wrapping a loss methodology can enhance performance. The table below reports the test accuracy results obtained upon revisiting the experiments on MNIST3 with a hyperparameter search for all methods.
| **Method** \ **Contamination Level** | 0.55 | 0.60 | 0.65 |
|----------|----------------------|----------------------|----------------------|
| ERM | 0.90 | 0.77 | 0.46 |
| RRM(ERM) | **0.98** | **0.96** | **0.67** |
| ELR | 0.98 | 0.97 | 0.82 |
| RRM(ELR) | **0.99** | **0.98** | **0.87** |
We highlight that the RRM-wrapped ERM method was uniformly better than ERM across all contamination levels examined. We observed this uniform enhancement for the case of ELR as well. Of particular note, once we tuned **ELR**'s hyperparameters, those same hyperparameter settings were maintained in **RRM(ELR)** (along with the RRM-specific hyperparameters tuned).
**These results suggest that even after hyperparameter tuning of a method like ERM or ELR, there are further enhancements to be obtained with RRM-wrapping.** | Summary: The paper presents Rockafellian Relaxation (RR), a new method to address labeling errors in machine learning datasets. RR is a loss reweighting technique that enhances neural network robustness against labeling errors and adversarial attacks, working across various data domains and model architectures. The key contribution is an approach that mitigates label corruption and class imbalance without needing clean validation sets, offering a practical solution for training robust models.
Strengths: - The paper introduces Rockafellian Relaxation (RR), a novel loss reweighting methodology that addresses learning with noisy label problems
- The authors provide a solid theoretical basis for RR, relating it to optimistic and robust distributional optimization formulations. RR is also designed to be architecture-independent, making it a versatile tool applicable across different neural network architectures.
- The method does not rely on having clean validation data, which is of advantage in many real-world applications.
Weaknesses: - While not explicitly mentioned, the iterative nature of the RR algorithm could potentially be computationally intensive, especially for large datasets.
- The method assumes a specific model of label noise (e.g., uniform label noise), which may not hold in all real-world scenarios.
- The paper could benefit from a more comprehensive comparison with other state-of-the-art methods for handling noisy labels, such as GCE [1], ELR[2], to better position RR in the existing literature.
[R1] Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
[R2] Early-Learning Regularization Prevents Memorization of Noisy Labels
Technical Quality: 3
Clarity: 2
Questions for Authors: - Could the authors provide insights into the computational complexity of the RR algorithm, particularly in the context of large-scale datasets such as clothing1m?
- How does the performance of RR compare under different models of label noise, especially those that deviate from the assumed uniform label noise model such as asymmetric/instance-dependent label noise?
- How does RR perform relative to other state-of-the-art methods like Generalized Cross Entropy (GCE) and Early-Learning Regularization (ELR) in terms of handling noisy labels?
- How sensitive is the performance of RR to the choice of hyperparameters, and are there any techniques to optimize these selections effectively?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Question
Could the authors provide insights into the computational complexity of the RR algorithm...?
# Response
Each iteration of RRM, as outlined in Algorithm 1 on page 5, comprises two tasks: (1) a gradient step and (2) a polynomial-size (in the training data size) linear program that performs the loss reweighting. As task (1)'s gradient step is entirely standard practice, and task (2) runs in time polynomial in the training data size, scaling to larger datasets is not a limiting factor. As a side note, the linear program in (2) has structure that can be exploited for fast computation. We are happy to add a discussion of these complexity matters, if given the opportunity.
# Question
How does the performance of RR compare under different models of label noise, ...such as asymmetric/instance-dependent label noise?
# Response
Although we explicitly state the use of uniform label noise in Section 3.1, which is indeed a very common scheme in the literature, we clarify that our analysis in fact did not rely on this assumption. Towards providing you insight into the non-uniform case, we have repeated the experiments on MNIST that produced Table 1, but now with non-uniform label noise. More precisely, after uniformly randomly selecting $C$ percent of the training pairs, we proceed to contaminate the label $y_i$ in each pair $(x_i, y_i)$ in the following non-uniform manner, as outlined below in the transition kernel matrix of (True Label, Contaminated Label) entries. For example, as the matrix below indicates, if the true label $y_i = 5$, then instead of uniformly randomly drawing an alternative digit $\tilde{y}_i$ from among $\{0, 1, \ldots, 9\} \setminus \{5\}$, we have
$\tilde{y}_i =$ \
0 w.p. 0.051\
1 w.p. 0.017 \
2 w.p. 0. \
3 w.p. 0.627 \
$\ldots$
| True \ Contaminated | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 0 | 0 | 0.077 | 0.077 | 0.154 | 0 | 0.077 | 0.385 | 0 | 0.154 | 0.077 |
| 1 | 0 | 0 | 0.333 | 0.111 | 0 | 0.111 | 0.111 | 0 | 0.333 | 0 |
| 2 | 0.097 | 0.065 | 0 | 0.258 | 0.032 | 0 | 0.097 | 0.194 | 0.258 | 0 |
| 3 | 0 | 0 | 0.125 | 0 | 0 | 0.125 | 0 | 0.125 | 0.625 | 0 |
| 4 | 0.111 | 0.037 | 0.074 | 0.074 | 0 | 0.074 | 0.222 | 0.037 | 0.111 | 0.259 |
| 5 | 0.051 | 0.017 | 0 | 0.627 | 0.017 | 0 | 0.153 | 0 | 0.102 | 0.034 |
| 6 | 0.235 | 0.176 | 0.059 | 0.059 | 0.059 | 0.176 | 0 | 0 | 0.235 | 0 |
| 7 | 0.050 | 0.225 | 0.200 | 0.125 | 0 | 0 | 0 | 0 | 0.200 | 0.200 |
| 8 | 0.107 | 0.036 | 0.107 | 0.357 | 0.107 | 0.071 | 0.071 | 0.107 | 0 | 0.036 |
| 9 | 0.064 | 0.170 | 0 | 0.213 | 0.170 | 0.170 | 0.021 | 0.085 | 0.106 | 0 |
These entries were generated by the confusion matrix of an imperfect MNIST classifier.
The results from this new experiment confirm the performance benefits that were observed (compare to Table 1) under conditions of uniform label contamination.
| $\epsilon_{test}$\C | 0% AT | 0% A-RRM | 5% AT | 5% A-RRM | 10% AT | 10% A-RRM | 20% AT | 20% A-RRM | 30% AT | 30% A-RRM |
|-|-|-|-|-|-|-|-|-|-|-|
| 0 | 96.5 | **97.3** | 93.6 | **95.6** | 60.4 | **87.8** | 32.2 | **92.4** | 58.3 | **89.2** |
| 0.1 | 93.4 | **95.2** | 89.3 | **92.4** | 63.2 | **84.7** | 42.5 | **89.5** | 56.1 | **81.7** |
| 0.25 | 92.4 | **93.1** | 87.9 | **90.6** | **86.9** | 86.3 | 80.6 | **89.3** | 69.0 | **79.8** |
| 0.5 | **92.0** | 90.9 | **90.3** | 89.8 | **94.4** | 91.2 | **92.9** | 89.2 | **85.2** | 81.6 |
| 1 | **89.4** | 85.5 | **90.3** | 86.8 | **94.9** | 92.6 | **93.9** | 86.6 | **81.8** | 77.8 |
# Question
How does RR perform relative to other state-of-the-art methods like Generalized Cross Entropy (GCE) and Early-Learning Regularization (ELR) in terms of handling noisy labels?
# Response
Although ELR also strives to handle noisy labels, we emphasize that RRM is a wrapper that can be paired with loss-optimization methodologies like ELR. We perform experiments on MNIST-3 (digits 0, 1, and 2 only) to illustrate the effectiveness of RRM wrapping. Test accuracy results are posted below.
| **Method** \ **Contamination Level** | 0.55 | 0.60 | 0.65 |
|-|-|-|-|
| ERM | 0.899 | 0.769 | 0.455 |
| RRM(ERM) | 0.977 | 0.960 | **0.771** |
| ELR | **0.987** | 0.978 | 0.545 |
| RRM(ELR) | 0.973 | **0.984** | 0.433 |
As the table shows, RRM wrapped around ERM (empirical risk minimization) improves test accuracy across all contamination levels. Similarly, ELR wrapped with RRM improves test accuracy over ELR alone at a label contamination level of 0.6. While this does not appear to hold for levels 0.55 and 0.65, it is possible that, given more time, a more thorough hyperparameter search would yield similar benefits.
As for GCE, this is also effectively a loss-reweighting method of sorts like RRM. Indeed, in equation (13) of GCE[1], each training example is given a weight of 1 or 0 - a byproduct of their choice of truncated, negative Box-Cox loss to be optimized. We note that RRM can reweight with more nuance however - in particular, training examples can be given weights between 0 and 1.
# Question
How sensitive is the performance of RR to the choice of hyperparameters...?
# Response
Qualitatively, we found the hyperparameter search non-burdensome, and it yielded beneficial RRM performance. In the discussion following Theorem 3.1, we provide insight into how $\gamma$ may be tuned in relation to an estimate $\alpha$ of the labeling error in the dataset. If invited, we can quantify more precisely the sensitivity of performance to hyperparameter changes.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for addressing my questions. I have decided to maintain my score, which leans toward acceptance.
I suggest that in the revised version, the authors include additional experiments beyond MNIST, such as CIFAR, CIFAR-N, and Clothing-1M, to further support the efficacy of the proposed method. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and input! We have provided a separate rebuttal to each of you, and hope that we have addressed all questions/concerns. Looking forward to engaging with you in the discussion period to come. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
PMechRP: Interpretable Deep Learning for Polar Reaction Prediction | Reject | Summary: This paper attempts to address the problem of low interpretability of reaction prediction methods by proposing modeling step-wise polar reactions. To model such mechanisms it uses an existing dataset PMechDB. The authors propose an approach to model such reaction by first selecting the right atoms to react from the input molecules using learned models and then react them.
Strengths: - Several existing chemistry reaction prediction models are benchmarked on the PMechDB dataset.
- A way to integrate reaction mechanism information is introduced.
Weaknesses: - The paper has substantial clarity problems:
- Table captions are insufficiently informative, requires going deeper into the text to understand what results are actually presented (e.g. 'Table 3: Top-N Accuracy of Trained Models').
- Figures 5 and 6 are formatted inconsistently with the rest of the file.
- Citation quality is poor:
- Could provide more references to prior work overall. e.g. section 3.3 describes prior work on sequence to sequence modelling without any references.
- PMechDB is introduced in a way that makes it unclear, whether the database is a contribution of this work or not.
- Novelty is not prominent. Method in [18] (OrbChain) is already working with similar task on a similar dataset.
- Evaluation is insufficient:
- Source code for reproduction has not been provided.
- The resulting models have not been evaluated on the global datasets, making it unclear whether the fine tuning as specified in this work improves the performance in general rather than on the test set of PMechDB.
- Error bars are not provided.
- Not benchmarked against a comparable method, referenced in [18].
Technical Quality: 2
Clarity: 1
Questions for Authors: What would the performance of the models tuned on this dataset be on general reaction prediction benchmarks? Would it improve the performance as compared to models without interpretability steps or not?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 1
Limitations: The authors address some of the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer Comment:
The paper has substantial clarity problems:
Table captions are insufficiently informative, requires going deeper into the text to understand what results are actually presented (e.g. 'Table 3: Top-N Accuracy of Trained Models')
Figures 5 and 6 are formatted inconsistently with the rest of the file.
Response:
Table captions have been revised to better clarify the results being presented. Figures 5 and 6 have been updated to make their formatting more consistent. In the attached PDF we have provided Figure 3, which contains the revised versions of Figures 5 and 6.
Reviewer Comment:
Citation quality is poor:
Could provide more references to prior work overall. e.g. section 3.3 describes prior work on sequence to sequence modeling without any references.
Response:
References have been added to section 3.3.
Reviewer Comment:
Novelty is not prominent:
Method in [18] (OrbChain) is already working with similar task on a similar dataset.
Response:
For novel ML architectures, we present the two-step transformer. However, the major contribution of the manuscript is to address the prediction of polar reactions in an explainable way. Polar reactions are an extremely complex, commonly encountered, and fundamental group of chemical reactions in organic chemistry. Predicting polar reactions is one of the most difficult problems in AI applied to science, and understanding them is necessary for designing synthetic pathways. We train models on a newly introduced dataset and are able to provide interpretable predictions of polar mechanisms with high accuracy.
Reviewer Comment:
Evaluation is insufficient:
Source code for reproduction has not been provided.
The resulting models have not been evaluated on the global datasets, making it unclear whether the fine tuning as specified in this work improves the performance in general rather than on the test set of PMechDB.
Error bars are not provided.
Response:
The source code for the two-step models cannot be provided as it uses licensed OpenEye Scientific Software to do the chemoinformatics processing.
Evaluating the models on global datasets would mean testing them on a task they are not trained to do. The models are designed to predict elementary steps, not overall transformations like those contained in standard benchmarking datasets.
The standard metric for evaluating reaction prediction models is top-n accuracy. The models were trained once on the training set and then evaluated once on the test set, so there are no error bars to report.
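As a rough illustration of the top-n accuracy metric mentioned above (a minimal sketch with hypothetical function and variable names, not the authors' evaluation code):

```python
def top_n_accuracy(ranked_predictions, true_products, n=5):
    """Fraction of reactions whose true product appears among the top-n
    ranked predictions.

    ranked_predictions: list of prediction lists, each sorted by model confidence.
    true_products: list of ground-truth products, one per reaction.
    """
    hits = sum(
        gold in preds[:n]
        for preds, gold in zip(ranked_predictions, true_products)
    )
    return hits / len(true_products)

# Toy example with SMILES-like strings:
preds = [["CCO", "CCN", "CCC"], ["OCC", "CCO", "C=O"]]
truth = ["CCN", "C=O"]
print(top_n_accuracy(preds, truth, n=2))  # 0.5
```

Because the metric is computed from a single trained model over a fixed test set, it is deterministic, which is consistent with the authors' point that there is no variance to report.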
Reviewer Comment:
Not benchmarked against a comparable method, referenced in [18].
Response:
Due to space limitations, we cannot compare against all existing methods. We selected a representative subset of relevant methods and do not claim to compare against all existing ones.
Reviewer Comment:
What would the performance of the models tuned on this dataset be on general reaction prediction benchmarks? Would it improve the performance as compared to models without interpretability steps or not?
Response:
This is a good point, but it is a complex question for several reasons. Firstly, it is difficult to choose a benchmarking dataset. The most popular benchmarking datasets involve overall transformations, which are a series of elementary steps chained together.
Secondly, our method is computationally expensive. In order to predict an overall transformation, a series of elementary step predictions must be chained together. This means a branching tree search must be performed, and as the depth of the tree grows, the runtime increases exponentially. It is difficult to assess performance on overall transformations without knowing the depth (number of steps) of the pathway, which would let us set a reasonable stopping point. We have provided several example pathways that we have recovered in the attached PDF as Figure 2. We are actively collecting a dataset of pathways extracted from organic chemistry textbooks; in preliminary experiments, we recover the target products in 65% of reactions.
We cannot say what the performance on general reaction prediction benchmarks would be, as we did not assess the models on overall transformation datasets. | Summary: Current reaction prediction models lack interpretability. This paper evaluates various machine learning models on the PMechDB dataset, which contains polar elementary steps. In addition, this paper proposes a new system, PMechRP, which achieves the highest top-5 accuracy.
Strengths:
1. A new benchmark has been introduced, which improves the interpretability and causality of a chemical reaction.
2. Several methods are evaluated.
Weaknesses:
1. This paper reads like a technical report.
2. The main conference track is not suitable for this paper. I think the dataset & benchmark track is more suitable.
3. Writing is poor.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 1
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer Comment:
This paper reads like a technical report. The main conference track is not suitable for this paper; I think the dataset & benchmark track is more suitable. The writing is poor.
Response:
We address a very important problem in chemistry: the prediction of polar reactions. Polar reactions are an extremely diverse, complex, and fundamental subset of chemical reactions, widely observed in important synthetic pathways. Predicting polar reactions is one of the most difficult problems in AI applied to science; at the moment, no AI predicts chemical reactions at the level of an expert. The major contribution of the manuscript is to address the prediction of polar reactions in an explainable way. We train models on a newly introduced dataset and are able to provide interpretable predictions of polar mechanisms with high accuracy. | Summary: Previous reaction prediction models formulate forward chemical reactions in an end-to-end manner, considering only the input and output states while ignoring the intermediate states that describe electron redistribution. This work tries different models on a new benchmark dataset, PMechDB. Experimental results demonstrate the effectiveness of the transition state information in the new benchmark dataset.
Strengths: The motivation is quite clear. This reviewer agrees with the importance of the exploration of intermediate electron transfer. This is particularly important for the chemical reaction simulation, benefitting the understanding of reaction mechanisms.
Weaknesses: (1) The technical contribution of this work is very limited. This reviewer does not see enough improvements from the algorithm side. Also, it seems the dataset is not proposed by this work. The contribution of this work is overall limited.
(2) If this work intends to propose a new benchmark, then much more comprehensive reaction models should be covered. Currently, two important reaction models are not discussed: "non-autoregressive electron redistribution modeling for reaction predictions" and "A Generative Model For Electron Paths." In addition, the evaluation metric and the new task are not clearly described. More detailed descriptions should be provided for clarity.
(3) The presentation of this work is not very clear. This reviewer does not fully understand how the multi-step information helps the reaction modeling. A good example illustrating the significance of the intermediate step information is required. At this stage, this reviewer thinks the multi-step transition information can be easily captured by recursive modeling of single-step reaction models. Currently, this reviewer does not see what new challenges are brought by the intermediate step.
(4) This work is very similar to the published paper "AI for Interpretable Chemistry: Predicting Radical Mechanistic Pathways via Contrastive Learning." This reviewer does not see many differences between the submitted work and this prior work.
Technical Quality: 2
Clarity: 2
Questions for Authors: Is this paper published in NeurIPS 2023 as "AI for Interpretable Chemistry: Predicting Radical Mechanistic Pathways via Contrastive Learning"? The content seems very similar. If not, what are the differences between this work and the previous work?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: Constructing the benchmark dataset with ground-truth multi-step electron transition states is very hard. This may hinder the further development of this direction.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer Comment:
The technical contribution of this work is very limited. This reviewer does not see enough improvements from the algorithm side. Also, it seems the dataset is not proposed by this work. The contribution of this work is overall limited.
Response:
For novel ML architectures, we present the two-step transformer. However, the major contribution of the manuscript is to address the prediction of polar reactions in an explainable way. Polar reactions are an extremely complex, commonly encountered, and fundamental group of chemical reactions in organic chemistry. Predicting polar reactions is one of the most difficult problems in AI applied to science, and understanding them is necessary for designing synthetic pathways. We train models on a newly introduced dataset and are able to provide interpretable predictions of polar mechanisms with high accuracy.
Reviewer Comment:
If this work intends to propose a new benchmark, then much more comprehensive reaction models should be covered. Currently, two important reaction models are not discussed: "non-autoregressive electron redistribution modeling for reaction predictions" and "A Generative Model For Electron Paths." In addition, the evaluation metric and the new task are not clearly described. More detailed descriptions should be provided for clarity.
Response:
The number of model comparisons we could include was limited by the NeurIPS page limit. However, the reviewer is correct that these models are interesting, and we believe their modeling of electron flows makes them relevant to this project. The NERF model from the "non-autoregressive electron redistribution modeling for reaction predictions" paper will be included in the updated manuscript.
Reviewer Comment:
The presentation of this work is not very clear. This reviewer does not fully understand how the multi-step information helps the reaction modeling. A good example illustrating the significance of the intermediate step information is required.
Response:
An example illustrating the importance of intermediate steps will be added to the manuscript. We have included a figure demonstrating this in the attached pdf titled Figure 1. This figure illustrates the creation of an unwanted side product in an intermediate step in the synthetic pathway for the drug Deucravacitinib. This led to a decrease in the overall purity of the products. By modeling the overall transformation as a series of elementary steps, competing pathways can be identified and the synthetic pathway can be optimized to reduce unwanted side products and increase purity and yield.
Reviewer Comment:
At this stage, this reviewer thinks the multi-step transition information can be easily captured by recursive modeling of single-step reaction models. Currently, this reviewer does not see what new challenges are brought by the intermediate step.
Response:
We are unsure what the reviewer is asking.
If the “single-step” reaction models refer to the mechanistic models:
This is exactly what we are doing. The mechanistic models are designed so that they can be recursively applied to generate a series of elementary steps which can describe an overall transformation.
If the “single-step” reaction models refer to overall transformation reaction models:
The task of predicting elementary steps is inherently different from the task of predicting overall transformations. To make a simple analogy, consider the reactants to be a set of cooking ingredients: the overall transformation model predicts the dish we are cooking, while the mechanistic model predicts the product of each step of the recipe. Chaining the overall transformation model will not produce any pathway because it does not predict intermediate steps; it only predicts the final product.
Reviewer Comment:
Is this paper published in NeurIPS 2023 as "AI for Interpretable Chemistry: Predicting Radical Mechanistic Pathways via Contrastive Learning"? It seems the content is very similar. If not, what are the differences between this work and the previous work?
Response:
The main contribution of this work is to present a predictor for elementary polar reaction steps. The AI for Interpretable Chemistry paper addresses prediction of radical reaction steps. Polar reactions are a much more diverse group of reactions than radical reactions. We train on a larger training set, containing a different type of reaction, whose mechanisms are considerably more diverse and complex. | Summary: The paper describes a new approach to predict polar reaction mechanisms, the most important class of chemical reaction mechanisms; this can be quite useful for chemical reaction prediction.
This reviewer's rating is based on the current presentation of the manuscript; if the authors are willing to enhance its clarity, this reviewer is willing to increase their score.
Strengths: - Addressing an underexplored but important problem
- decent results
- interesting results with pre-trained methods, some of which are surprising (T5 seems not to work as well despite multi-task pretraining)
Weaknesses: - Model and data processing descriptions are quite short; they should be expanded and presented coherently in one location in the manuscript. From the description in the manuscript, I would likely not be able to re-implement the method
- It is not immediately clear which ensemble is shown in table 4
- maybe not so much innovation from the ML side?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What could be the reason the t5 model is not working so well, even though it's pre-trained on multiple tasks?
2. Why did the authors not consider employing more advanced GNN models or the models from Kayala et al?
Tiny details:
line 105. "We utilize the innovative text-based reaction predictor, Molecular Transformer" - the MT is now a 5-year-old model; maybe not call it innovative anymore?
related prior work by Segler https://doi.org/10.1002/chem.201605499 and in particular by Bradshaw https://openreview.net/forum?id=r1x4BnCqKX should be cited
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: ok
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Reviewer Comment: Model and data processing descriptions are quite short and should be expanded, and presented coherently in one location in the manuscript. From the description in the manuscript I would likely not be able to re-implement the method
Response: The transformer models are publicly available. As for the two-step models, they are described in detail in the Kayala et al paper. We will add an additional section in the appendix to further clarify the basis for the two-step methods.
Reviewer Comment: It is not immediately clear which ensemble is shown in table 4
Response: In Table 4, all the ensembles are for the two-step transformer architecture. The table caption has been updated to clarify this.
Reviewer Comment: maybe not so much innovation from the ML side?
Response: For novel ML architectures, we present the two-step transformer. However, the major contribution of the manuscript is to address the prediction of polar reactions in an explainable way. Polar reactions are an extremely complex, commonly encountered, and fundamental group of chemical reactions in organic chemistry. Predicting polar reactions is one of the most difficult problems in AI applied to science, and understanding them is necessary for designing synthetic pathways. We train models on a newly introduced dataset and are able to provide interpretable predictions of polar mechanisms with high accuracy.
Reviewer Comment: What could be the reason the t5 model is not working so well, even though it's pre-trained on multiple tasks?
Response: One possible reason T5Chem is outperformed by Chemformer on the forward prediction task could be the pretraining. The Chemformer model is pre-trained on roughly 4.5 times more forward reactions than the T5Chem model.
However, it is worth noting that the performance of T5Chem is consistent with the original T5Chem paper. In the T5Chem paper, they compare the model to Molecular Transformer, where it offers an improvement of 2% to top-5 accuracy. In our paper, we observe a similar improvement of 4% to top-5 accuracy between T5Chem and Molecular Transformer. They do not compare their model to Chemformer in their paper.
Reviewer Comment: Why did the authors not consider employing more advanced GNN models or the models from Kayala et al?
Response: The models from Kayala et al were implemented and used for the two-step prediction as well as the two-step transformer.
We had applied more advanced GNNs to the data at the time of submission, but the results were incomplete. We will include GNN results in the updated version of the manuscript.
Reviewer Comment: Tiny details: line 105. "We utilize the innovative text-based reaction predictor, Molecular Transformer" - the MT is now a 5 year old model, maybe not call it innovative anymore?
related prior work by Segler https://doi.org/10.1002/chem.201605499 and in particular by Bradshaw https://openreview.net/forum?id=r1x4BnCqKX should be cited
Response: Both of these points have been fixed in the manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply and effort to address the points raised, I will adjust my score.
I’d still recommend considering further improvements to the clarity of the writing, as other reviewers have also pointed out. The manuscript addresses an important and timely topic, which unfortunately does not come across as clearly as it should. | Rebuttal 1:
Rebuttal: We have carefully considered all comments given by the reviewers, and wish to thank them for helping us enhance the quality of the manuscript. In response to the reviewers’ feedback, we have prepared an additional pdf with figures to address specific comments.
Pdf: /pdf/4cdcae9663eb69eb63e843d16082cdde6f1141a4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing | Accept (poster) | Summary: The paper introduces COrrespondence-guided Video Editing (COVE), a method to improve video editing with pretrained text-to-image (T2I) diffusion models. It addresses the challenge of maintaining temporal consistency by using diffusion feature correspondence. COVE identifies and samples highly corresponding tokens across frames, applying self-attention to them during editing. It also reduces GPU memory usage with a temporal-dimensional token merging strategy. COVE integrates seamlessly into existing T2I models without extra training and shows superior performance in various scenarios.
Strengths: 1. The method is reasonable and easy to follow.
2. The presentation is clear and easy to understand.
3. The Sliding-window Strategy seems useful in video editing methods.
Weaknesses: 1. The experimental performance improvement seems limited. Most of the editing results are of a similar type involving style/color changes, which, in my experience, is easy to achieve with previous methods. According to Fig. 6, the performance compared to earlier methods does not show high superiority, which is not convincing support for "...COVE achieves the state-of-the-art performance...".
2. I noticed that [1] also uses DIFT features to guide video editing, especially object replacement. Could you please compare COVE with it? For example, replace the "jeep" with a "sports car" or "bus" in Fig. 11. I would like to see the performance when there is a large motion/shape change.
[1] VideoSwap: Customized Video Subject Swapping with Interactive Semantic Point Correspondence
Technical Quality: 3
Clarity: 3
Questions for Authors: How do you make sure that $k=3$ is suitable for all videos? As suggested in Fig. 2, the subject may suffer from severe morph across video frames when there is a significant motion change.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weakness above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer nziD,
Thank you for your time and thoughtful feedback on COVE. We are pleased that you recognize the effectiveness of the sliding window strategy. We provide our feedback as follows.
# Experimental performance
> Experimental performance improvement seems limited. Most of the editing results present a similar type involving style/color change, which is easy to implement in the previous methods from my experience. According to Fig. 6, the performance compared to earlier methods does not show high superiority, which is not convincing to support "...COVE achieves the start-of-the-art performance...".
The baseline methods mentioned in our paper are among the ***most representative and effective*** approaches of recent years, having received widespread recognition and attention. In the experiments, we present ***both quantitative and qualitative results*** to illustrate COVE's superior performance. As shown in Table 1, COVE outperforms these baseline methods on several widely used quantitative metrics. The results of the user study also illustrate that the edited video quality aligns with human subjective perception. For the qualitative comparison, COVE successfully alleviates problems of the baselines such as blurring, temporal inconsistency, and inconsistency with the prompt (Figure 6). The ablation study (Table 2 and Figure 7) also illustrates that COVE significantly enhances the temporal consistency of edited videos. We also ***provide more qualitative results*** (Figure 15 in the uploaded PDF), illustrating COVE's ability to perform shape editing.
# Comparison with Baselines
> I noticed that [1] also uses DIFT features to guide video editing, especially object replacement. Could you please compare COVE with it? For example, replace the "jeep" with a "sports car" or "bus" in Fig. 11. I would like to see the performance when there is a large motion/shape change.
VideoSwap proposes leveraging semantic point correspondence to align motion trajectories for high-quality video editing. However, it requires users to manually provide keypoints as an additional condition for each video, which is a nontrivial burden for users. Furthermore, it needs extra training, increasing the cost of video editing.
**In contrast**, COVE is a training-free framework that only requires users to provide the text prompt. Unfortunately, *the code of VideoSwap is not publicly available*. We have filled out the form provided by the authors to request the source code, and we are willing to compare our method with VideoSwap once we obtain it.
We also provide more experiment results as you suggested, such as changing the jeep into sports car and bus. The results are shown in Figure 15 in the uploaded PDF, illustrating the satisfying performance of our method for shape editing. We are willing to add the results to the final revision of our paper.
# The value of $K$
> How do you make sure that K=3 is suitable for all videos? As suggested in Fig. 2, the subject may suffer from severe morph across video frames when there is a significant motion change.
As illustrated in Table 2, for the videos in our experiments, $K=3$ and $K=5$ achieve similar performance. As a result, we choose $K=3$ for a better trade-off between quality and resources consumed. Moreover, $K$ is a hyperparameter in our method, which can be adjusted for each video to achieve high-quality editing.
---
Rebuttal 2:
Comment: Dear reviewer nziD,
Thanks for your valuable suggestions on our paper!
We have responded to each of your concerns in our rebuttal, including further explanation of our method's performance, more qualitative results for shape editing (Figure 15 in the uploaded PDF), and the chosen value of $K$.
If you have any unsolved issues, please let us know. We are more than happy to answer any further questions you may have!
Authors | Summary: In this paper, the authors tackle the problem of video editing using Text-to-Image diffusion models. To achieve this, the authors make use of strong diffusion model’s feature correspondence abilities. The authors propose a sliding-window based strategy to track features of source video based on correspondences. Here, the goal is to overcome the computational complexity involved with finding similarity across all patches of the video. This is achieved by computing similarity scores within a sliding window. After this, a set of corresponding tokens are collected and merged using an existing work (Token Merging paper, ICLR 2023). Experimental analysis shows that the proposed method achieves state-of-the-art performance across many tasks.
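The sliding-window correspondence search summarized above could be sketched roughly as follows (a toy NumPy illustration with hypothetical names, not the paper's actual implementation): for each token in one frame, only a small window around the same location in the next frame is scored, and the top-k most similar positions are kept.

```python
import numpy as np

np.random.seed(0)

def window_correspondence(feat_a, feat_b, radius=2, top_k=3):
    """feat_a, feat_b: (H, W, d) diffusion features of two consecutive frames.
    For each token in frame A, search only a (2*radius+1)^2 window around the
    same location in frame B; return the top_k most similar positions."""
    H, W, _ = feat_a.shape
    matches = {}
    for i in range(H):
        for j in range(W):
            cands, sims = [], []
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < H and 0 <= x < W:
                        a, b = feat_a[i, j], feat_b[y, x]
                        cands.append((y, x))
                        # cosine similarity between diffusion feature vectors
                        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
            order = np.argsort(sims)[::-1][:top_k]  # most similar first
            matches[(i, j)] = [cands[t] for t in order]
    return matches

feat = np.random.rand(8, 8, 16)
m = window_correspondence(feat, feat, radius=1, top_k=1)
print(m[(3, 3)])  # identical frames: each token matches itself
```

This makes the computational saving concrete: matching against the full frame costs on the order of (HW)^2 comparisons per frame pair, while the windowed search costs only HW·(2r+1)^2.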
Strengths: S1. Model does not require any training or even inference time optimization. The method computes correspondences in an efficient manner and track it effectively.
S2. The ideas proposed in the paper are simple and well presented overall.
S3. Results look impressive - the method is able to achieve state-of-the-art in comparison to the works in the area of video editing with T2I models.
Weaknesses: W1. The paper clearly states on L208 that the base model is Stable Diffusion and that the model does not require any training, including inflation (which I think is a major strength). The Stable Diffusion model has only self-attention and cross-attention layers. It is not clear how "temporal layers" come into the picture on L200. Further, it is not clear what the 3D-UNet is in the context of Fig. 3.
W2. The proposed method tends to alter the regions of the image that do not correspond to the text prompt. In Fig. 1, the results shows that the proposed alters the background significantly.
W3. Results show that the proposed method is unable to alter the shape of the object, since the correspondence merging limits the shape changing ability drastically.
W4. The core novelty of exploiting the sliding window to use a lower field of comparison (as in Eq. 10) is somewhat straightforward and limited given the obvious computational constraints. However, in conjunction with other ideas such as Token Merging [ref 1] and correspondences [ref 2], the work is novel.
W5. There have been works in the video editing space which exploit correspondences for video editing [ref 3,4]. It will be helpful to differentiate the proposed method with them. However, I would like to acknowledge that these works could have been parallel to this submission.
References
[ref 1] Token Merging: Your ViT But Faster, ICLR 2023
[ref 2] Emergent Correspondence from Image Diffusion, NeurIPS 2023
[ref 3] Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer, CVPR
[ref 4] GenVideo: One-shot target-image and shape aware video editing using T2I diffusion models, CVPRw
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does applying \tilde{Corr} help during inversion?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper addresses the limitations of the work in Appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer Ht4d,
Thanks for your comprehensive review and insightful comments on our paper. We appreciate that you recognize the advantages of training-free and impressive results of our method. The response to your concerns is shown below.
# Temporal layers and 3D Unet
> The paper clearly states that the base model is Stable diffusion $\cdots$. Further, it is not clear what is the 3D-Unet in context of Fig. 3.
The temporal layer is widely applied in video diffusion models and mainly consists of spatio-temporal attention [1]. In spatio-temporal attention, for a query Q in one frame, the key K and value V come from both the current frame and the other frames. This operation aims to capture the spatio-temporal consistency of the video.
To convert a 2D UNet into a 3D UNet, current methods [1] usually inflate the 2D convolution layers into pseudo-3D convolution layers, with 3x3 kernels replaced by 1x3x3 kernels, and replace the self-attention in the 2D UNet with the aforementioned spatio-temporal attention. We will add a detailed introduction to the 3D UNet in the appendix of our paper.
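The spatio-temporal attention described above can be sketched roughly as follows (our own minimal NumPy illustration, not the authors' implementation; the names and shapes are assumptions): each frame's queries attend to keys and values gathered from all frames at once.

```python
import numpy as np

np.random.seed(0)

def spatio_temporal_attention(q, k, v):
    """q, k, v: (F, N, d) arrays -- F frames, N tokens per frame, dim d.
    Every query attends to keys/values flattened across ALL frames."""
    F, N, d = q.shape
    k_all = k.reshape(F * N, d)   # keys from every frame
    v_all = v.reshape(F * N, d)   # values from every frame
    scores = q.reshape(F * N, d) @ k_all.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over all frames' tokens
    return (weights @ v_all).reshape(F, N, d)

x = np.random.rand(4, 16, 8)  # a 4-frame video, 16 tokens per frame
out = spatio_temporal_attention(x, x, x)
print(out.shape)  # (4, 16, 8)
```

A real implementation would of course use the model's learned projections and keep the 1x3x3 pseudo-3D convolutions mentioned above; the point of the sketch is only that the key/value pool spans frames, which is what ties the frames together temporally.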
# Editing unrelated regions
> The proposed method tends to alter $\cdots$ alters the background significantly.
We would like to point out that keeping the background unchanged is not the primary focus of our work. In fact, most current SOTA video editing methods (such as [1,2,3,4,5,6]) which ***take text as the only condition*** also cannot preserve the background. Solving this would require additional conditions such as a mask for regional editing, which we believe is an interesting topic for future work.
On the other hand, our paper mainly focuses on maintaining temporal consistency and enhancing the quality of generated videos. The experimental results in the paper illustrate that we have effectively achieved this goal. In addition, we also notice that in some cases COVE can maintain the background unchanged (such as Figure 16 in uploaded PDF).
# Shape editing
> Results show that the proposed method is unable to alter the shape of the object, since the correspondence merging limits the shape changing ability drastically.
COVE also demonstrates a satisfying capability for making small modifications to the shape of an object, such as changing a jeep into a sports car or a bus (Figure 15 in the uploaded PDF). We believe this is because the correspondence accurately reflects the motion information in the source video. This motion information remains unchanged between the source and edited videos even when users modify the object's shape. As a result, COVE not only significantly alleviates temporal inconsistency but also shows satisfying performance in altering the shape of objects.
# Novelty
> The core novelty of exploiting the sliding window to use $\cdots$ the work is novel.
Thanks for acknowledging the novelty of our work! As stated in the paper, our proposed sliding-window method is based on the continuity of videos, which makes it intuitive and effective. By significantly reducing the computational complexity while achieving outstanding performance, we believe our work is insightful for the video editing community.
# Comparison with baselines
> There have been works in the video $\cdots$ could have been parallel to this submission.
Thanks for your advice. [6] proposes using a pretrained T2V model for motion transfer. [7] proposes target- and shape-aware InvEdit masks for shape editing. However, they need extra training [7] or optimization [6]. **In contrast**, COVE is a training-free framework that can leverage a popular pre-trained T2I model (such as Stable Diffusion) to achieve impressive performance. We compare our method with [6] (Figure 15 and Table 9). *The source code of [7] is not released*; we are willing to compare our method with it once the source code is released.
**Table 9.** Quantitative comparison between DMT [6] and our method. We use each method to edit ten 20-frame videos that are widely used in the field of video editing and report the quantitative results.
|Method|SC|MS|AQ|IQ|
|-|-|-|-|-|
|DMT [6]|0.9573 |0.9874 | 0.7003|0.7396 |
|**Ours**|**0.9689**|**0.9887**|**0.7135**|**0.7447**|
# Effectiveness of correspondence guided attention during inversion
> How does applying $\tilde{Corr}$ help during inversion?
The quality of the noise obtained by inversion can significantly affect the final editing quality [8, 9]. The **C**orrespondence-**G**uided **A**ttention (CGA) during inversion increases the quality and temporal consistency of the obtained noise ($z^T$ in Figure 3), which further helps enhance the quality and consistency of the edited videos. We include an ablation study (Figure 16 and Table 10), which will also be added to our main paper.
**Table 10.** Ablation Study of **C**orrespondence-**G**uided **A**ttention (i.e., applying $\tilde{Corr}$) during inversion.
||SC|MS|AQ|IQ|
|-|-|-|-|-|
|No CGA during inversion|0.9413|0.9740|0.7098|0.7422|
|**CGA in both inversion and denoising**|**0.9689**|**0.9887**|**0.7135**|**0.7447**|
# References
[1] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation, ICCV2023
[2] FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing, ICLR 2024
[3] RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models, CVPR 2024
[4] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation, CVPR 2024
[5] Codef: Content deformation fields for temporally consistent video processing, CVPR 2024
[6] Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer, CVPR 2024
[7] GenVideo: One-shot target-image and shape aware video editing using T2I diffusion models, CVPRw
[8] Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models, CVPR 2024
[9] Pix2video: Video editing using image diffusion, ICCV 2023
---
Rebuttal Comment 1.1:
Comment: The rebuttal addresses the concerns. I highly encourage the authors to include this discussion in the paper. Expecting that they will, I raise my score to "accept". | Summary: This paper focuses on improving the temporal consistency of video editing. The authors propose to leverage the inherent diffusion-feature correspondence with a sliding-window-based strategy. With this design, the tokens in noisy latents can be sampled based on the "one-to-many" correspondence. The experiments demonstrate superior performance over other methods.
Strengths: + The paper proposes a new diffusion-feature-correspondence-guided video editing method. In addition, the authors introduce token merging and a sliding window for higher efficiency.
+ The method demonstrates superior performance to previous methods, with extensive ablations evaluating its effectiveness.
+ The paper is well-written with clear motivations.
Weaknesses: - The proposed method relies heavily on the correspondences of diffusion features. However, such correspondences may be difficult to obtain for videos with large content motions. The reviewer suggests a more detailed discussion of how to ensure the accuracy of the correspondences, as well as the potential limitations of inaccurate correspondences. In addition, it would be better to quantitatively evaluate the correctness of the diffusion-feature correspondences.
- The paper mainly discusses and compares its method with optical-flow-guided video editing methods. However, such a correspondence-based idea is also related to deformation-field-based methods such as CoDeF [1] and neural-atlas-based methods [2], which can ensure pixel/point-level accuracy by evaluating video reconstruction accuracy. The authors are suggested to compare or discuss their method with such approaches.
- The related-work section seems a bit short and not very extensive. There have been many video editing works that target improving temporal/subject consistency, efficiency, long-video editing, etc. The reviewer suggests including a more extensive related-work section and discussing the differences between the proposed work and them.
[1] Ouyang, Hao, et al. "Codef: Content deformation fields for temporally consistent video processing." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[2] Chai, Wenhao, et al. "Stablevideo: Text-driven consistency-aware diffusion video editing." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: o The reviewer appreciates the qualitative examples of correspondences of diffusion features in appendix. Is there any quantitative metric to evaluate the correspondences of diffusion features?
o How to define the temporal length for token merging? Does it remain the same for all experiment videos?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer LETZ,
Thank you for your comprehensive and detailed review of our paper and the recognition of our work's clarity and effectiveness. We provide our feedback as follows.
# Accuracy of correspondences
> The proposed method highly relies on the correspondences of diffusion features $\cdots$. In addition, it would be better to quantitatively evaluate the correctness of the correspondences of diffusion features.
As illustrated in Figure 10 in the appendix and Figure 14 in the uploaded PDF, the correspondence acquired through the diffusion feature is accurate and robust. As there is no existing video dataset with annotated keypoints on each frame, to further evaluate its accuracy quantitatively, we collect 5 videos with 30 frames and 5 videos with 60 frames and manually label keypoints on each frame. We then report the percentage of correct keypoints (PCK) following prior work [1, 2] (Table 7).
Specifically, for each video, given the first frame with its keypoints, we obtain the predicted corresponding keypoints on the other frames through the diffusion feature. We then evaluate the distance between each predicted point and the ground truth; a predicted point is considered correct if it lies within a small neighborhood of the ground truth. PCK is the number of correctly predicted points divided by the total number of predicted points. The result in Table 7 illustrates that the diffusion feature can accurately find the correct position in most cases for video editing.
**Table 7.** Quantitative comparison of the accuracy between diffusion feature correspondence (DFC) and optical flow correspondence (OFC).
|Method|PCK|
|-|-|
|OFC|0.87|
|**DFC**|**0.92**|
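The PCK computation described above reduces to a thresholded distance count. A minimal sketch (an illustration under our own naming, not the authors' evaluation code; the "small neighborhood" is expressed as a pixel-distance threshold):

```python
import numpy as np

def pck(pred_points, gt_points, threshold):
    """Percentage of Correct Keypoints: the fraction of predicted keypoints
    lying within `threshold` pixels of their ground-truth positions.
    pred_points, gt_points: float arrays of shape (num_points, 2)."""
    dists = np.linalg.norm(pred_points - gt_points, axis=-1)
    return float((dists <= threshold).mean())
```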
The potential limitation of inaccurate correspondences is that suboptimal results could be obtained. For instance, as discussed in Section 4.4, using too small a window size leads to inaccurate correspondences, resulting in poor editing outcomes. This problem can be effectively solved by adjusting the window size.
# Comparison with baselines
> The paper mainly discussed and compared their method with optical flow-guided video editing methods $\cdots$ . The authors are suggested to compare or discuss their method with such approaches.
Thanks for your advice. The deformation-field-based methods [3] factorize the video into a 2D canonical content field and a 3D temporal deformation field. For each video, they require complicated training of implicit deformable models to obtain a 2D and a 3D hash table. Moreover, they still need an optical flow model to preprocess the video sequences.
Neural-atlas-based methods [4] leverage layered representations to propagate appearance information among frames. However, two diffusion models are employed to process the background and foreground separately, which increases resource consumption. Furthermore, they also need extra training.
**In contrast**, COVE is a training-free method that effectively obtains highly accurate correspondence simply by calculating the similarity between features. We also compare their performance (Table 8 and Figure 15 in the uploaded PDF). We are willing to add the results to the final revision of the paper.
**Table 8.** Quantitative comparison among CoDeF [3], StableVideo [4], and COVE. We use each method to edit ten 20-frame videos that are widely used in the field of video editing and report the quantitative results.
|Method|SC|MS|AQ|IQ|
|-|-|-|-|-|
|CoDeF [3]|0.9369|0.9619|0.6847|0.7134|
|StableVideo [4]|0.9215|0.9771|0.6781|0.7273|
|**Ours**|**0.9689**|**0.9887**|**0.7135**|**0.7447**|
# Add more related works
> The related works seem to be a bit short and less extensive $\cdots$. The reviewer suggests to include more extensive related works and discuss the differences of the proposed work with them.
Thanks for your advice; we will discuss more related works in our paper, including but not limited to CoDeF [3] and StableVideo [4].
# Temporal length for token merging
> How to define the temporal length for token merging? Does it remain the same for all experiment videos?
In our experiment, the temporal length for token merging is simply set to the length of the video.
# References
[1] CATs: Cost aggregation transformers for visual correspondence, NeurIPS 2021
[2] TransforMatcher: Match-to-match attention for semantic correspondence, CVPR 2022
[3] CoDeF: Content deformation fields for temporally consistent video processing, CVPR 2024
[4] StableVideo: Text-driven consistency-aware diffusion video editing, ICCV 2023
---
Rebuttal 2:
Title: Response by Reviewer LETZ
Comment: Dear Authors,
Thanks for your hard work and detailed analysis. The rebuttal has resolved all my concerns. The quantitative experiments on diffusion-feature correspondence in the rebuttal help demonstrate the effectiveness of leveraging diffusion features for video editing. The authors are encouraged to include these results in the paper. Therefore, I have decided to raise my score from 5 to 6.
Best regards,
Reviewer LETZ | Summary: The paper proposes using the correspondence features that already exist in diffusion models to find matched tokens between different frames in a video for consistent video editing. The motivation is that video editing models using optical flow to find matched features can exhibit a one-to-many matching issue, which limits video editing consistency when there are large motions. To reduce the expensive computation, the paper proposes a "sliding window" (w.r.t. frames), finding correspondences only in subsequent frames. Reasonable qualitative and quantitative results are shown in the experiments section.
Strengths: - The proposed method intuitively makes sense
- The flow of the paper is relatively easy to follow
- The results shown are reasonable
Weaknesses: - Optical flow is also essentially just finding matched patches/features between different frames. The proposed way of finding one-to-many correspondences between frames could also be achieved using optical flow (with some small modifications to existing optical flow algorithms). I do not see any evidence in the paper that using the features of a diffusion model is better than using the features extracted by an optical flow algorithm (explicitly or implicitly, depending on the optical flow method)
- the above should be an important ablation study that is currently missing from the paper
- the sliding window algorithm, though simple and achieving compute reduction, can also lead to errors accumulating across frames when editing long videos
- the novelty is a bit limited. My argument is the following:
1. That features implicitly extracted by Stable Diffusion can be used for feature correspondence is well known to the community, e.g., Tang et al., NeurIPS 2023
2. Finding correspondences for video editing is widely studied and shown to be effective, including but not limited to the several references this paper already cited
3. The main contribution then seems to be proposing to use SD's features to find correspondences instead of optical flow. However, the one-to-many and many-to-one problems are essentially a matter of the threshold used when finding correspondences and the receptive-field size used to match patches, and I cannot see why the features extracted by SD are superior to the features extracted by a state-of-the-art optical flow model. This ablation study is missing
Technical Quality: 2
Clarity: 2
Questions for Authors: See weakness
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer sLJK,
Thanks for your time and thoughtful review. We appreciate your recognition of the satisfying experimental results and clear writing of the paper. We provide our feedback as follows.
# Optical flow model for one-to-many correspondences
> Optical flow is $\cdots$ by an optical flow algorithm.
Thanks for your valuable suggestions. Modifying the optical flow model [1,2] from one-to-one to one-to-many correspondence may seem simple and intuitive. However, we would like to point out that it is still nontrivial in practice due to the inherent characteristics of optical flow models.
- For the $i$th frame $I_i$ (with shape [H, W]) in a video, the optical flow model ***directly predicts*** an optical flow field $F$ with shape [H, W, 2], representing the offset of each pixel along the $x$ and $y$ axes in $I_{i+1}$. The prediction of $F$ is a ***regression problem*** during training, where an L1 loss is applied between $F$ and the ground truth. In other words, for each pixel, optical flow models ***directly predict*** the position of the ***single*** corresponding pixel in the next frame, rather than first computing a confidence for each pixel in the next frame and then selecting the pixel with the highest confidence.
- Based on the analysis above, off-the-shelf optical flow models cannot obtain one-to-many correspondence. Furthermore, there is also ***no existing dataset with annotations of one-to-many correspondence*** to support training a model with a modified structure from scratch. The construction of such a dataset and the design of a new model structure remain unexplored by existing works, which is out of the scope of our paper.
As a result, current optical flow models fail to obtain one-to-many correspondence. Instead, we provide an ablation study between COVE with $K=1$ (i.e., one-to-one correspondence) and the flow-based baseline (Figure 13 and Table 5). The results illustrate that even with one-to-one correspondence, COVE achieves comparable or even better results than the flow-based baseline. Furthermore, one of the main advantages of COVE is that it can effectively obtain one-to-many correspondence: as $K$ increases, the quality of the edited videos improves, achieving SOTA performance. We will update the details in the paper.
**Table 5.** Quantitative comparison between OFC (optical-flow correspondence) and DFC (diffusion-feature correspondence). The detailed experiment settings follow the original paper.
|Method|SC|MS|IQ|AQ|
|-|-|-|-|-|
|OFC|0.9617|0.9622|0.7155|0.6544|
|**DFC ($K=1$)**|0.9637|0.9817|0.7148|0.6979|
|**DFC ($K=3$)**|**0.9731**|**0.9892**|**0.7441**|**0.7122**|
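The contrast can be made concrete with a toy sketch (our own illustrative code, not the paper's implementation): a flow model regresses a single offset per pixel, whereas feature-similarity matching scores all target-frame tokens and can keep the top-$K$ matches.

```python
import numpy as np

def topk_correspondence(feat_src, feat_tgt, k=3):
    """For each source token, rank all target-frame tokens by cosine
    similarity of their features and keep the top-k matches (one-to-many).
    feat_src: (N, C), feat_tgt: (M, C); returns indices of shape (N, k)."""
    a = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    b = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    sim = a @ b.T                           # (N, M) cosine-similarity matrix
    return np.argsort(-sim, axis=1)[:, :k]  # top-k target indices per source token
```

Setting `k=1` recovers one-to-one matching, mirroring the $K=1$ ablation in Table 5.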
# Accuracy of the sliding window
> The sliding window $\cdots$ long videos.
Thanks for your advice. In our method, the position of the sliding window is dynamically adjusted rather than fixed in each frame (Figure 4), which ensures the accuracy of correspondence in long videos. We also add an experiment on the editing quality of 10 short videos (20 frames) and their longer counterparts (60 frames). The results (Table 6 and Figure 14) illustrate that COVE is also effective for longer videos. We also provide quantitative results on the accuracy of correspondence in Table 7 (in the feedback for Reviewer LETZ).
**Table 6.** Quantitative comparison between the editing quality of 20 frames and 60 frames.
|Video length|SC|MS|IQ|AQ|
|-|-|-|-|-|
|20|0.9689|0.9887|0.7447|0.7135|
|60|0.9685|0.9882|0.7441|0.7140|
Moreover, the window size is a hyper-parameter; we may enlarge it if there are extremely sharp and fast motions (although such videos are very rare in the real world). Even with a larger window size, our method still greatly reduces complexity.
# Novelty
We believe COVE elegantly addresses several challenging limitations in previous SOTA methods, potentially offering valuable insights for further research.
> 1.Features inexplicitly $\cdots$ to the community.
Existing work has primarily focused on correspondence between two images. However, it remains to be explored whether this correspondence can maintain consistent stability and accuracy over multiple frames and longer time scales in videos, and whether it can aid video editing tasks. COVE elegantly proposes using correspondence to guide attention during the inversion and denoising procedures. Achieving outstanding results (Figure 15 in the uploaded PDF), our work illustrates that ***using diffusion features to enhance quality and consistency*** is an insightful topic in the field of video editing.
> 2.Finding correspondences $\cdots$ be effective.
Although some previous methods also explore correspondence for video editing, they heavily depend on the availability of a pretrained optical flow model with high accuracy, which is often not feasible in practice. Until now, acquiring correspondence information without optical flow models has remained a challenging problem for video editing. Addressing this, COVE successfully illustrates that the ***optical flow model is not indispensable for finding correspondence in video editing***, and it achieves ***even better*** results than flow-based methods.
> 3.The main contribution $\cdots$ is missing.
As stated above, in an optical flow model, the position of the corresponding token is directly predicted through regression rather than by first calculating similarity in the feature space and then selecting the token with the highest similarity. As a result, the one-to-many and one-to-one problem ***is not a simple thresholding problem in optical flow models***. COVE elegantly addresses this limitation, obtaining one-to-many correspondence simply and effectively. We therefore believe that our work is an insightful and interesting contribution to the community.
# References
[1] RAFT: Recurrent All-Pairs Field Transforms for Optical Flow, ECCV 2020 Best Paper
[2] GMFlow: Learning optical flow via global matching, CVPR 2022
---
Rebuttal 2:
Comment: Dear reviewer sLJK,
Thanks for your valuable advice on our paper.
We have responded to each of your concerns in our rebuttal, including the comparison between optical-flow correspondence and diffusion-feature correspondence (Table 5, Table 7, and ***Figure 13 in the uploaded PDF***), the accuracy of the sliding-window method on long videos (Table 6 and ***Figure 14 in the uploaded PDF***), and the novelty of our method. We also point out that using an optical flow model to obtain one-to-many correspondence is nontrivial.
If you have any unsolved questions, please let us know. We are more than happy to answer any further questions you may have!
Authors | Rebuttal 1:
Rebuttal: # Overall reply for all reviewers
We thank all reviewers for their constructive comments. We have responded to each of the concerns below. In the following response, Figure 1-12 and Table 1-4 are in our main paper, ***Figure 13-16 are in the uploaded PDF of the rebuttal, and Table 5-10 are in our response***. For quantitative comparison of the video quality, we still report the four popular metrics used in the paper, i.e., Subject Consistency (SC), Motion Smoothness (MS), Aesthetic Quality (AQ), and Imaging Quality (IQ).
Pdf: /pdf/ff679105ce83ec341720b85dd99146fe210e9a2f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising | Accept (poster) | Summary: This paper introduces AsyncDiff, an acceleration framework for diffusion models that transforms the traditional sequential denoising process into an asynchronous process. The key insight is that hidden state features in consecutive sampling steps exhibit high similarity. Therefore, feeding the output of the preceding component at time step t-1 as an approximation of the original input to each U-Net component has a negligible impact on performance. This approach allows the parallel denoising processes of AsyncDiff to operate fully asynchronously. Experiments on various versions of Stable Diffusion demonstrate that AsyncDiff significantly speeds up the denoising process for text-to-image generation. Additionally, AsyncDiff is applicable to video generation models such as AnimateDiff and SVD, further showcasing its versatility.
Strengths: 1. The writing is overall clear and well-structured. The paper effectively explains the limitations of previous methods that use patch parallelism and introduces a novel approach to "asynchronous" denoising to address these issues.
2. The paper provides thorough comparisons with baseline methods, such as Distrifusion, demonstrating clear improvements in both generation quality and efficiency.
3. The versatility of AsyncDiff is evidenced by experiments conducted on different versions of Stable Diffusion as well as video generation models.
4. The authors address concerns regarding the overhead of communication costs across multiple GPUs, showing that these costs are significantly lower than the model execution time.
Weaknesses: 1. Although the main contribution relies on the observation that the hidden states of consecutive steps are similar, the analysis of this phenomenon lacks details. Several key aspects need clearer explanation:
(a) Can the similarity of the hidden states be quantitatively measured? For instance, does a low MSE between hidden states indicate that the two states are “similar”?
(b) Is this phenomenon specific to the U-Net architecture, or is it agnostic to the backbone of the denoising model (e.g., Diffusion Transformer)?
(c) Is it only applicable to DDIM sampling? Does the phenomenon also hold for other fast ODE solvers, such as the DPM solver [1]?
2. While the experiments were conducted using 50 DDIM steps, a comparison with the original model using a number of DDIM steps that achieve a similar speedup would strengthen the argument for AsyncDiff. For instance, in a setup where AsyncDiff achieves a 2.7x speedup, comparing the FID score with the original model using 50 / 2.7 = 19 DDIM steps would clearly demonstrate the necessity of parallelizing the diffusion model.
[1] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, Lu et al., NeurIPS 2022
Technical Quality: 3
Clarity: 4
Questions for Authors: Most questions are included in the “Weaknesses” section.
Would a similar asynchronous denoising approach be applicable in a conditional generation setup as well? For instance, achieving faster inference for diverse conditional image generation models, such as ControlNet [1] and Zero-1-to-3 [2], could be practically useful.
[1] Adding Conditional Control to Text-to-Image Diffusion Models, Zhang et al., ICCV 2023
[2] Zero-1-to-3: Zero-shot One Image to 3D Object, Liu et al., ICCV 2023
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss the limitations of AsyncDiff (communication cost, dependency on the base model) and the potential societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Q1: Can the similarity of the hidden states be quantitatively measured? For instance, does a low MSE between hidden states indicate that the two states are “similar”?**
Thanks for the valuable comment. We provide quantitative analysis and visualization of the hidden state similarities in **Figure 2** and **Figure 3** of the **attached PDF FILE**.
**Figure 2** presents the cosine similarity and MSE of each block's hidden state between adjacent steps, providing a quantitative measure of similarity. The results show that hidden states are highly similar between most steps, except at the initial stage. This high similarity allows for using features from the previous step to parallelize calculations. **Figure 3** visually illustrates this phenomenon by displaying the hidden states of adjacent steps during the diffusion process. Both quantitative and qualitative analyses support our key insight. The initial instability in the curve explains why slightly increasing the warm-up steps can make the AsyncDiff output more consistent with the original output, as shown in Table 2 of our submission.
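The two measures used in Figure 2 can be computed directly on flattened hidden states; a minimal sketch (illustrative only, not the code that produced the figure):

```python
import numpy as np

def step_similarity(h_prev, h_curr):
    """Cosine similarity and MSE between the (flattened) hidden states of
    two adjacent denoising steps. High cosine / low MSE indicates that the
    previous step's features can stand in for the current step's input."""
    a, b = h_prev.ravel(), h_curr.ravel()
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    mse = float(np.mean((a - b) ** 2))
    return cos, mse
```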
## **Q2: Is this phenomenon specific to the U-Net architecture, or is it agnostic to the backbone of the denoising model (Diffusion Transformer)?**
Thanks for the comment. As a universal acceleration method, AsyncDiff is agnostic to the backbone of the denoising model and is compatible with DiT-based diffusion models. As shown in Table 1, our method significantly speeds up the DiT-Based Stable-Diffusion-3-medium model while maintaining high-quality outputs. The qualitative results in **Figure 1 (b)** of the **attached PDF file** demonstrate that AsyncDiff preserves the excellent image quality and accurate text generation of SD 3.
### Table 1: Quantitative evaluations of AsyncDiff on DiT-Based SD 3
*Note: We compare the generative quality when using the DDIM sampler and AsyncDiff to achieve the same speedup. The SD 3 using a default 28-step DDIM sampler is the baseline.*
| **Method** | **Speed up ↑** | **Latency ↓** | **CLIP Score ↑** | **FID ↓** |
|----------------------|----------------|------------|------------------|-----------|
| SD 3 Original | 1.0x | 10.99s | 32.26 | 33.99 |
| SD 3 DDIM=15 steps | **1.8x** | 6.10s | 31.90 | 34.55 |
| **AsyncDiff (N=2 S=1)**| **1.8x** | **6.05s** | **32.14** | **32.28** |
## **Q3: Is it only applicable to DDIM sampling? Does the phenomenon also hold for other fast ODE solvers, such as the DPM solver?**
Thanks for the valuable feedback. AsyncDiff is a universal method that can be used with various samplers, including the DPM-Solver. Table 2 below provides the quantitative evaluation of AsyncDiff on the SD 2.1 with DPM-Solver sampler. With the same speedup ratio, AsyncDiff significantly improves generation quality compared to the baseline. We also provide the qualitative results in **Figure 1 (a)** of our **attached PDF File**. Our method achieves significant acceleration while maintaining higher consistency with the original output.
### Table 2: Quantitative evaluations of AsyncDiff using DPM-Solver sampler
*Note: We compared the generative quality when using the DPM-Solver sampler and AsyncDiff to achieve the same speedup. SD 2.1 using a 25-step DPM-Solver sampler is the baseline.*
| **Method** | **Speed up ↑** | **MACs ↓** | **CLIP Score ↑** | **FID ↓** |
|----------|---------|------------|----------|-----------|
| SD 2.1 DPM-Solver 25steps | 1.0x | 76T | 31.57 | 28.37 |
| SD 2.1 DPM-Solver 15steps | 1.6x | 46T | 31.52 | 28.89 |
| **Our AsyncDiff (N=2 S=1)**| 1.6x | **38T** | **31.58** | **27.71** |
| SD 2.1 DPM-Solver 10steps | 2.2x | 30T | 31.29 | 29.28 |
| **Our AsyncDiff (N=3 S=1)**| 2.2x | **25T** | **31.36** | **28.20** |
## **Q4: A comparison with the original model using a number of DDIM steps that achieve a similar speedup would strengthen the argument for AsyncDiff.**
Thanks for the valuable suggestions. In Table 3, we present a more comprehensive quantitative evaluation of AsyncDiff. Our method achieves significantly better generation quality at similar speeds, and the advantage increases with higher speedup.
### Table 3: Additional Quantitative Evaluations of AsyncDiff
*Note: We compared the generative quality when using the DDIM sampler and AsyncDiff to achieve the same speedup. The default 50-step SD 2.1 is the baseline.*
| **Method** | **Speed up ↑** | **MACs ↓** | **CLIP Score ↑** | **FID ↓** |
|-----------|---------|----------|-----------|-----------|
| SD 2.1 Original | 1.0x | 76T | 31.60 | 27.89 |
| SD 2.1 DDIM 27steps | 1.8x | 41T | 31.53 | 28.43 |
| **Our AsyncDiff (N=2 S=1)**| 1.8x | **38T** | **31.59** | **27.79** |
| SD 2.1 DDIM 21steps | 2.3x | 32T | 31.46 | 29.09 |
| **Our AsyncDiff (N=3 S=1)**| 2.3x | **25T** | **31.56** | **28.00** |
| SD 2.1 DDIM 15steps | 3.0x | 23T | 31.26 | 30.12 |
| **Our AsyncDiff (N=2 S=2)**| 3.0x | **19T** | **31.43** | **28.55** |
| SD 2.1 DDIM 11steps | 4.0x | 17T | 30.99 | 32.25 |
| **Our AsyncDiff (N=3 S=2)**| 4.0x | **13T** | **31.22** | **29.41** |
## **Q5: Would a similar asynchronous denoising approach be applicable in a conditional generation setup as well?**
Thanks for the valuable comments. AsyncDiff is a universal method that performs well with conditional generation setups. In **Figure 1 (c)** of the **attached PDF file**, we show how AsyncDiff accelerates ControlNet+SDXL. Our method reduces latency from 19.90s to 14.30s using two A5000 GPUs while nearly mirroring the original output.
---
Rebuttal Comment 1.1:
Comment: After thoroughly reviewing the authors' rebuttal, I am satisfied that my original concerns have been addressed. I particularly appreciate the additional experiments with Diffusion Transformers (SD3), and the clear evidence supporting the similarity of the hidden states.
In light of this, I have decided to raise my rating to "Weak Accept."
---
Reply to Comment 1.1.1:
Comment: Thanks for your valuable feedback and time in reviewing my work, your insights are greatly appreciated. | Summary: This paper proposes AsyncDiff, a plug-and-play acceleration scheme that enables model parallelism across multiple devices. The core method involves dividing the diffusion model into multiple components and executing the inference in parallel. This is facilitated by the high similarity between hidden states in consecutive diffusion steps. With AsyncDiff, it is claimed that off-the-shelf diffusion models can be effectively accelerated with negligible degradation. Therefore, the key contribution of this paper lies in providing a simple and effective method to accelerate the diffusion process through parallelism.
Strengths: 1. The proposed method is well-motivated, and the effectiveness of AsyncDiff has been evaluated on multiple models, making the results compelling.
2. This method should be useful for video generation, which can take minutes or hours to produce a single video. The paper also supports this with experiments conducted on Stable Video Diffusion.
3. As shown in Table 5, it seems that the communication cost can be covered by the inference cost, which is a favorable feature for model acceleration.
Weaknesses: 1. Some concepts, such as the “dependency chain” in ABS, are not well explained. It would be beneficial if the author could provide a minimal explanation for these concepts.
2. The paper appears to share a related idea with Distrifusion [1]. Could the author provide a more detailed discussion about the key differences between them? For example, the main difference seems to be the splitting schemes, where Distrifusion splits the data, and this approach splits the model. What is the main advantage of AsyncDiff?
3. Are there any failure cases for the proposed AsyncDiff?
[1] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
Technical Quality: 4
Clarity: 3
Questions for Authors: Additional Questions:
1. Is this method helpful for large batch sizes?
2. Is this method applicable to more GPUs?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: I do not find any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Q1: Some concepts, such as the “dependency chain” in ABS, are not well explained. It would be beneficial if the author could provide a minimal explanation for these concepts**
Thank you for the valuable feedback. This is indeed a question that requires further explanation. In the diffusion process, there are two "dependency chains": one between time steps and the other between model blocks.
**Dependency between time steps:** The denoising process is sequential, meaning that denoising at timestep *t* depends on the completion of denoising at timestep *t-1*. This sequential dependency causes latency accumulation and limits the ability to perform parallel inference.
**Dependency between blocks:** The large denoising model is split into *N* components, each with a similar computational load. The input to the *n*-th block depends on the output of the *(n-1)*-th block, which poses challenges for running the entire denoising model in parallel.
Our AsyncDiff seeks to disrupt both the dependency chain between time steps and the dependency chain between blocks by leveraging the hidden state's similarity between adjacent steps, allowing for the parallel execution of the denoising process while closely approximating the results of the sequential process.
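The latency benefit of breaking both chains can be estimated with a simple cost model. The sketch below is our own illustration, not the paper's implementation; it assumes the denoising model is split into `n` equal components, that `warmup` initial steps remain fully sequential, and that communication overhead is ignored.

```python
# Illustrative cost model (our own sketch, not the authors' implementation).
# Assumptions: the model is split into `n` equal components, each taking
# `block_time` seconds; after `warmup` fully sequential steps, every component
# consumes the hidden state its upstream neighbour produced at the previous
# timestep, so all components run concurrently.

def sequential_latency(timesteps: int, n: int, block_time: float) -> float:
    # Both dependency chains intact: all blocks of all steps run one by one.
    return timesteps * n * block_time

def async_latency(timesteps: int, n: int, block_time: float, warmup: int = 1) -> float:
    # Warm-up steps stay sequential; afterwards each step costs one block's time.
    return warmup * n * block_time + (timesteps - warmup) * block_time

T, N, b = 50, 4, 1.0
speedup = sequential_latency(T, N, b) / async_latency(T, N, b)  # ~3.77x; tends to N for large T
```

Under this model the achievable speedup approaches *N* as the number of timesteps grows, which matches the intuition that only the warm-up steps pay the full sequential cost.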
## **Q2: Could the author provide a more detailed discussion about the key differences between AsyncDiff and Distrifusion? What is the main advantage of AsyncDiff?**
Thank you for the comment. As you mentioned, the main difference between AsyncDiff and Distrifusion lies in their core approaches. AsyncDiff focuses on model parallelism, enabling the denoising model to run in parallel regardless of the input data. In contrast, Distrifusion emphasizes data parallelism, parallelizing the computation across different patches of the image.
Based on these differences, the advantages of AsyncDiff can be summarized as follows:
**1. Wider Applicability:**
AsyncDiff is a more universal acceleration framework because it focuses on model parallelism, independent of the type of input data or the task of the original model. This makes it suitable for various diffusion-based generation tasks, including text-to-image, text-to-video, image-to-video, super-resolution, ControlNet, and speech generation. It can also be used with both UNet-based and DiT-based diffusion models.
**2. Lower Memory Requirements:**
Distrifusion requires caching activation maps for each layer, significantly increasing GPU memory demands and complicating practical applications. In contrast, AsyncDiff only needs to cache a few hidden states on each device, keeping memory requirements nearly the same as the original model.
**3. Higher GPU Utilization:**
When the generated image resolution is not very high, Distrifusion can lead to lower GPU utilization, resulting in underutilization of computing resources. AsyncDiff mitigates this issue by not altering the size of the input data, thus maintaining efficient GPU usage. The experimental results in Table 3 of our submission show that AsyncDiff can achieve the same speedup with four GPUs as Distrifusion does with eight GPUs, while also delivering better generation quality.
### Table 1: Quantitative comparison with DistriFusion on SD 2.1
*To ensure a fair comparison with Distrifusion, we increased the warm-up steps in our method to match the speedup ratio of Distrifusion, allowing us to fairly compare generation quality and resource costs.*
| **Method**| **Speed up ↑** | **Devices** | **MACs ↓** | **Memory ↓** | **CLIP Score ↑** | **FID ↓** | **LPIPS ↓** |
|-----|-------|----|---|----|-----|-----|-----|
| Original Model | 1.0x | 1 | 76T | 5240MB| 31.60| 27.87 | --|
| **Distrifusion** | 1.6x | **2** | **38T**| 6538MB | **31.59**| 27.89| **0.0178** |
| **Ours (N=2 S=1)** | 1.6x | **2** | 44T | **5450MB** | **31.59**| **27.79** | 0.0944 |
| **Distrifusion**| 2.3x | 4 | **19T**| 7086MB | 31.43| 27.97 | 0.2710 |
| **Ours (N=2 S=2)** | 2.3x | **3** | 20T | **5516MB**| **31.49**| **27.71** | **0.2117** |
| **Distrifusion**| 2.7x | 8| **10T**| 7280MB| 31.31 | 28.12 | 0.2934 |
| **Ours (N=3 S=2)**| 2.7x | **4** |14T| **5580MB**| **31.40**| **28.03** | **0.1940** |
## **Q3: Are there any failure cases for the proposed AsyncDiff?**
As an acceleration method, AsyncDiff depends on the quality of the original base model. If the base model's generation quality is not good, AsyncDiff will also produce unsatisfactory results. While our method can significantly speed up the process and maintain the original generation quality, it typically cannot enhance the base model's original generation quality.
## **Q4: Is this method helpful for large batch sizes?**
Thanks for the comment. As shown in Table 2 below, AsyncDiff remains helpful for large batch sizes.
### Table 2: Acceleration and Latency under Different Batchsizes
*We evaluate the running speed of AsyncDiff on SD 2.1. The latency is evaluated using Nvidia RTX A5000 GPUs, CUDA 12.0, and PyTorch 2.1.*
| BatchSize|Original|N=2 S=1|N=4 S=1|N=3 S=1|N=2 S=2|N=3 S=2|
|----|-----|-----|-----|-----|-----|-----|
| 1| 1.0x(5.51s)| 1.8x(3.03s) | 2.3x(2.41s)| 2.6x(2.10s)| 3.0x(1.82s)| 4.0x(1.35s)|
| 4| 1.0x(19.34s)| 1.7x(11.46s)| 2.2x(8.60s)| 2.5x(7.85s) | 2.9x(6.75s)| 4.1x(4.77s)|
| 8| 1.0x(36.19s)| 1.7x(21.46s)| 2.2x(16.44s)| 2.4x(14.90s) | 2.8x(12.75s)| 4.0x(9.06s)|
## **Q5: Is this method applicable to more GPUs?**
Thanks for the comment. AsyncDiff is applicable to more GPUs. However, we do not recommend splitting the model into more than four components, as the speed gains are minimal. In practice, combining AsyncDiff with other data-parallel methods is more effective. AsyncDiff is compatible with any data-parallel solution (batch splitting, patch splitting). Exploiting model parallelism and data parallelism at the same time achieves an even higher speedup ratio while minimizing the sacrifice of generation quality. | Summary: This work introduces an acceleration method for diffusion models by distributing the model blocks across multiple GPUs and running different blocks asynchronously. The core motivation of this paper lies in the observation that the hidden states of a block exhibit high similarity across consecutive diffusion steps. This enables the execution of different blocks based on pre-computed results from earlier steps. The proposed method was evaluated on Stable Diffusion v2.1, v1.5, and XL, achieving a 1.8x to 4x acceleration with minimal performance loss on CLIP score and FID.
Strengths: 1) The idea of parallelizing the inference of different blocks is quite interesting. The method is general and can be applied to several diffusion models like Stable Diffusion, Stable Video Diffusion, and AnimateDiff. The method shows robustness to different speed-up ratios as shown in Tables 3, 4, and 5.
2) Extensive experiments were conducted in this paper, encompassing both image generation and video generation. The results are strong: the proposed method was able to achieve significant acceleration with only slight performance degradation.
Weaknesses: 1) Is the communication cost huge compared to the inference cost? As shown in Table 3, the number of GPUs used in this paper is primarily 2 or 3. Will there be issues if this is extended to 8 or 16 GPUs? Is this method still competitive when the number of devices exceeds 8, where cross-node communication becomes substantial?
2) The experimental section mainly focuses on quality metrics such as FID and CLIP Score. It would be beneficial if the authors could provide more analytical results about the method, such as the similarity of hidden states.
3) Figure 5: “Quantitative” should be “Qualitative”.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) What happens if the total number of timesteps is not divisible by the number of GPUs? Is the proposed method still applicable in this scenario?
2) Is the proposed method still effective on low-end GPUs or edge devices?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Q1: Is the communication cost huge compared to the inference cost? Will there be issues if this is extended to 8 or 16 GPUs? Is this method still competitive when the number of devices exceeds 8, where cross-node communication becomes substantial?**
Thanks for the valuable comment; this is indeed a question that needs further explanation.
As demonstrated in Table 1, the communication cost is minimal compared to the inference cost.
Applying the method to 8 or 16 GPUs can lead to increased communication costs due to GPU topology. In a common 8-GPU node, GPUs 0, 1, 2, and 3 form one group, while GPUs 4, 5, 6, and 7 form another. Cross-group communication is about 40% more costly than within-group communication, and the cross-node cost is even higher. This is a shared challenge for all distributed inference frameworks, in both the diffusion and LLM areas.
Our proposed stride denoising technique effectively alleviates this issue by reducing communication frequency. As shown in Table 2, during a 50-step diffusion process, AsyncDiff requires only 25 communications. This significant reduction means our method is less affected by increased cross-group or cross-node communication costs, keeping it competitive. In the future, we will explore more innovative technologies to further address this general challenge.
### Table 1: Time cost comparisons on SD 2.1
*Note: 'Overall' represents the overall latency, 'Running' represents the time cost of model running, 'Comm.' represents the communication time cost. 'Ratio' in this table represents the proportion of communication cost to overall latency. All measurements were conducted on NVIDIA A5000 GPUs.*
| **Configuration** | **Devices** | **Overall** | **Running** | **Comm.** | **Ratio** |
|------------|-------------|-------------|-------------|-----------|------------|
| AsyncDiff N=2 S=1 | 2 GPUs| 3.03s | 2.90s | 0.13s | 4.30% |
| AsyncDiff N=3 S=1 | 3 GPUs| 2.41s | 2.18s | 0.23s | 9.54% |
| AsyncDiff N=4 S=1 | 4 GPUs| 2.10s | 1.80s | 0.30s | 14.29% |
| AsyncDiff N=2 S=2 | 3 GPUs| 1.82s | 1.70s | 0.12s | 6.59% |
| AsyncDiff N=3 S=2 | 4 GPUs| 1.35s | 1.25s | 0.10s | 7.40% |
### Table 2: Effect of stride denoising in the 50-step Diffusion Process
*Note: Stride denoising significantly lowers communication costs by decreasing the communication frequency*
| **Configuration** |Communication Nums↓| Communication Freq↓|
|------------|-------------|-------------|
| *AsyncDiff* w/o stride denoising | 49 times | 0.98/timestep|
| *AsyncDiff* w/ stride denoising | **25 times** | **0.50/timestep**|
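The counts in Table 2 above can be reproduced with a one-line model, under our assumption that one round of boundary-state communication is needed per parallel denoising batch and that stride denoising completes `stride` timesteps per batch:

```python
import math

def comm_count(timesteps: int, stride: int = 1) -> int:
    # One communication round per parallel batch; a batch advances `stride`
    # timesteps, and the final state needs no broadcast.
    return math.ceil((timesteps - 1) / stride)

# 50-step diffusion: 49 rounds without stride denoising, 25 with stride 2,
# i.e. 0.98 vs. 0.50 communications per timestep.
```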
## **Q2: It would be beneficial if the authors could provide more analytical results about the method, such as the similarity of hidden states.**
Thank you for your valuable comment. We provide quantitative analysis and visualization of the hidden-state similarities in **Figure 2** and **Figure 3** of the **attached PDF file**.
**Figure 2** shows the cosine similarity and MSE for each block's hidden state between adjacent steps. As illustrated, the hidden states are highly similar between most steps, except at the initial stage. This high similarity enables us to use features from the previous step to parallelize calculations. **Figure 3** further visualizes this phenomenon by displaying the hidden states of adjacent steps in the diffusion process. The initial instability demonstrates why slightly increasing the warm-up step can make the AsyncDiff output image pixel-wise consistent with the original output, which is also shown in Table 2 in our submission.
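The per-block similarity analysis described above can be computed in a few lines. This is a generic sketch using NumPy stand-ins for the cached hidden states; in practice the tensors would come from the model's blocks at adjacent timesteps.

```python
import numpy as np

def adjacent_step_similarity(hidden_states):
    """Cosine similarity and MSE between each block output at step t and t-1.

    `hidden_states` is a list of equally-shaped arrays, one per timestep.
    """
    cos, mse = [], []
    for prev, cur in zip(hidden_states, hidden_states[1:]):
        a, b = prev.ravel(), cur.ravel()
        cos.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
        mse.append(float(np.mean((a - b) ** 2)))
    return cos, mse

# Synthetic drift: states change slowly, so adjacent-step similarity stays high.
states = [np.ones(16) + 0.01 * t for t in range(5)]
cos, mse = adjacent_step_similarity(states)
```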
## **Q3: Figure 5: “Quantitative” should be “Qualitative”.**
Thanks for the careful review. We will correct this error in the next version.
## **Q4: What happens if the total number of timesteps is not divisible by the number of GPUs? Is the proposed method still applicable in this scenario?**
Thank you for the valuable comment. In this case, AsyncDiff will not face any issues. Although we split the denoising model into *N* parts, we still obtain one timestep of noise after each parallel batch. However, the runtime for this step in the denoising model is reduced to *1/N*.
This is supported by the experimental results in Table 1 of our submission. When accelerating a 50-step diffusion model, AsyncDiff performs well, even when using 3 or 4 GPUs.
## **Q5: Is the proposed method still effective on low-end GPUs or edge devices?**
Thank you for the valuable comment. This is a very practical question.
We conducted additional experiments to show the acceleration ratio of AsyncDiff on consumer-grade graphics cards (NVIDIA RTX 2080 Ti GPUs). As shown in Table 3, our method still achieved a significant speedup on these GPUs.
### Table 3: Acceleration Ratio and Latency on low-end GPUs
*Note: Acceleration Ratio and Latency are evaluated using CUDA 12.0 and PyTorch 2.1*
| **GPU** | **FP16 Compute** |Original|N=2 S=1|N=3 S=1|N=2 S=2|N=3 S=2|
|-------------|------------------|-------------|-------------|-------------|-----------|-----------|
| NVIDIA RTX 2080Ti | 54 TFLOPS| 1.0x(8.20s)| 1.7x(4.91s) | 2.0x(4.08s) | 2.8x(2.94s)| 3.5x(2.35s)|
---
Rebuttal Comment 1.1:
Title: Final Rating
Comment: The rebuttal from the author has addressed all my concerns, thus I would like to keep my rating as accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for the insightful and constructive comment. Your expertise and time are greatly appreciated in helping improve my work's quality. | Summary: This paper proposes AsyncDiff to enable diffusion model parallelism across multiple devices and achieve a very impressive speedup with negligible degradation. Specifically, the denoising model is split into several components, each assigned to a different device. The conventional sequential denoising process is transformed into an asynchronous one by exploiting the high similarity of hidden states between consecutive time steps, enabling each component to compute in parallel.
Strengths: 1. AsyncDiff is a universal, plug-and-play acceleration scheme that enables model parallelism across multiple devices. The acceleration that AsyncDiff achieves on image and video diffusion models is impressive.
2. The motivation is clear and the experiments are comprehensive.
Weaknesses: 1. The discussion and comparison with existing diffusion samplers are insufficient; it is unclear whether AsyncDiff can be combined with other samplers such as DPM-Solver.
2. The paper does not specify the GPUs used, and there is no ablation of AsyncDiff across different GPU devices. Can other GPUs achieve a good acceleration ratio?
Technical Quality: 4
Clarity: 3
Questions for Authors: I do not have other questions about this paper.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: As stated by the author, AsyncDiff necessitates frequent communication between devices throughout the denoising process. Therefore, if the devices lack the capability to communicate effectively, AsyncDiff may not perform optimally.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Q1: Whether AsyncDiff can be combined with other diffusion samplers such as DPM-Solver**
Thanks for the valuable feedback. AsyncDiff is a universal method that can be used with various samplers, including DPM-Solver. Table 1 below provides a quantitative evaluation of AsyncDiff on SD 2.1 with the DPM-Solver sampler. At the same speedup ratio, AsyncDiff significantly improves generation quality compared to the baseline. We also provide qualitative results in **Figure 1 (a)** of our **attached PDF file**. Our method achieves significant acceleration while maintaining higher consistency with the original output.
### Table 1: Quantitative evaluations of AsyncDiff using DPM-Solver sampler
*Note: We compared the generative quality when using the DPM-Solver sampler and AsyncDiff to achieve the same speedup. SD 2.1 with a 25-step DPM-Solver sampler is the baseline.*
| **Method** | **Speed up ↑** | **MACs ↓** | **CLIP Score ↑** | **FID ↓** |
|----------------------|----------------|------------|------------------|-----------|
| SD 2.1 DPM-Solver 25steps | 1.0x | 76T | 31.57 | 28.37 |
| SD 2.1 DPM-Solver 15steps | 1.6x | 46T | 31.52 | 28.89 |
| **Our AsyncDiff (N=2 S=1)**| 1.6x | **38T** | **31.58** | **27.71** |
| SD 2.1 DPM-Solver 10steps | 2.2x | 30T | 31.29 | 29.28 |
| **Our AsyncDiff (N=3 S=1)**| 2.2x | **25T** | **31.36** | **28.20** |
## **Q2: Can other GPUs achieve a good acceleration ratio?**
Thanks for the valuable comments. We conducted additional experiments to compare the acceleration ratio of AsyncDiff on three different GPUs (NVIDIA RTX A5000, NVIDIA RTX 3090, and NVIDIA RTX 2080 Ti). As shown in Table 2, our method achieved a significant speedup across all tested GPUs.
### Table 2: Acceleration Ratio and Latency on Different GPUs
*Note: Acceleration Ratio and Latency are evaluated using CUDA 12.0 and PyTorch 2.1*
| **GPU** | **FP16 Compute** |Original|N=2 S=1|N=3 S=1|N=2 S=2|N=3 S=2|
|-------------|------------------|-------------|-------------|-------------|-----------|-----------|
| NVIDIA RTX A5000 | 117 TFLOPS| 1.0x(5.51s)| 1.8x(3.03s) | 2.3x(2.41s) | 3.0x(1.82s)| 4.0x(1.35s)|
| NVIDIA RTX 3090 | 71 TFLOPS| 1.0x(5.61s)| 1.8x(3.20s) | 2.1x(2.65s) | 2.9x(1.91s)| 3.5x(1.60s)|
| NVIDIA RTX 2080Ti | 54 TFLOPS| 1.0x(8.20s)| 1.7x(4.91s) | 2.0x(4.08s) | 2.8x(2.94s)| 3.5x(2.35s)|
---
Rebuttal Comment 1.1:
Comment: The authors have posted an effective rebuttal that resolves my concerns. I still vote for a weak acceptance of this work.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your dedicated time and effort in reviewing our submission. Your valuable feedback is greatly appreciated. | Rebuttal 1:
Rebuttal: Dear Chairs,
We sincerely appreciate the time and effort you have spent evaluating our submission, and we look forward to the discussion stage. We will include the review stage results in the appendix of our next version.
Pdf: /pdf/e42b2193df26cdfbec8899c3a3ff728ae36e1146.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AP-Adapter: Improving Generalization of Automatic Prompts on Unseen Text-to-Image Diffusion Models | Accept (poster) | Summary: This paper proposes a new task called MGAPO, aimed at addressing the generalization problem of existing APO methods on unseen text-to-image generation models. To achieve this, the authors present a two-stage method called AP-Adapter. In the first stage, keyword prompts are generated through a large language model; in the second stage, an enhanced representation space is constructed by leveraging inter-model differences, and prompt representations are adjusted through domain prototypes to enable generalization to unseen models. Experimental results demonstrate that AP-Adapter can generate high-quality images on previously unseen diffusion models and outperform existing methods in both semantic consistency and aesthetic quality.
Strengths: 1. The introduction of the MGAPO task makes text-to-image model design more applicable to real-world scenarios.
2. The related concepts and differences are clearly articulated, especially in Figure 1.
Weaknesses: 1. The core of this paper lies in the computation of domain prototypes, which are influenced by CLIP. However, the authors did not conduct relevant ablation experiments.
2. Figure 4 includes human evaluation, which introduces a degree of subjectivity. Additionally, previous methods like SUR-adapter did not conduct similar validation. Furthermore, the authors seem to lack detailed explanations regarding the human validation in this paper.
3. The proposed method is relatively complex, introducing additional models such as a pre-trained LLM and CLIP for prototype computation, which increases overall complexity and training/inference time. The authors did not discuss these aspects.
4. AP-Adapter still requires a large number of manually designed prompts as references during training, which may increase the workload and complexity of data preparation.
Technical Quality: 2
Clarity: 3
Questions for Authors: see Weaknesses
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors mentioned in the Limitation section that the main drawback of this paper is the small size of the dataset used, which is focused on single-character images. This limits the model's performance. Furthermore, the authors explained that as the dataset size expands, the number of domain prototypes might increase.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. We will address the concerns below.
Q1. The authors did not conduct relevant ablation experiments on CLIP.
A1. The computation of domain prototypes indeed relies on the image encoder. Moreover, domain prototypes are concatenated with text representations to serve as control conditions for image generation, so they should be as similar to text representations as possible. Given that stable diffusion 1.5 uses CLIP as the text encoder, we also use CLIP as the image encoder, ensuring that the image representations share a feature space with the text representations.
In the appendix, we explore the impact of CLIP on domain prototypes. The experiments include Figure 8 and Table 4 in section C.2.
Q2. The human validation lacks detailed explanations.
A2. We followed the approach presented in the SUR-adapter[1]. We collected 108 valid questionnaires. Participants were shown images generated by our method and baseline methods, along with their corresponding descriptions. They were asked to choose the better image based on the questions "Which image do you think has higher quality?" and "Which image do you think better matches the text description?" We then compiled and analyzed the survey results.
Q3. The proposed method introduces additional models such as a pre-trained LLM and CLIP for prototype computation.
A3. The LLM is only used in the first stage for automatic prompt generation, employing the ICL strategy, and does not participate in subsequent training. The LLM’s model parameters are not updated. During training, the parameters of CLIP are frozen and do not get updated. During inference, CLIP is no longer used; only the trained domain prototypes are involved. Therefore, training and inference times are not significantly increased.
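As a schematic sketch of this setup (our own illustration with placeholder modules, not the paper's actual architecture), freezing CLIP while training only the adapter's transformer and linear layers can be expressed as:

```python
import torch
import torch.nn as nn

# Placeholder for the frozen CLIP encoder (the real encoder is much larger).
clip_encoder = nn.Linear(512, 512)
for p in clip_encoder.parameters():
    p.requires_grad_(False)  # frozen: no gradient updates

# The adapter: the only trainable parameters (transformer + linear layers),
# matching the description that only these are updated by the loss.
adapter = nn.Sequential(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    nn.Linear(512, 512),
)

trainable = [p for p in adapter.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # optimizes the adapter only
```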
Q4. The data collection process adds to the workload and complexity of the proposed method.
A4. All existing APO methods require a large number of manually designed prompts as references for training.
Moreover, such data collection and retraining must be repeated for APO on each newly-arriving diffusion model.
Our method is designed to address this issue by eliminating the need for new data preparation and retraining when encountering new diffusion models. Our trained adapter can be directly used on new models without retraining.
[1] Zhong S, Huang Z, Wen W, et al. Sur-adapter: Enhancing text-to-image pre-trained diffusion models with large language models[C]//Proceedings of the 31st ACM International Conference on Multimedia. 2023: 567-578.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It addressed some of my concerns, and I am willing to increase my confidence level.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We greatly appreciate your detailed review and insightful comments. | Summary: The paper proposes model-generalized automatic prompt optimization (MGAPO), an automatic prompt optimization (APO) method which trains on a set of known models to enable generalization to unseen models during testing. MGAPO presents significant challenges, a perspective missing in previous methods. MGAPO includes a two-stage prompt optimization method. In the first stage, a LLM is used to rewrite the prompts and generate keyword prompts. In the second stage, the methods leverage inter-model differences (derived from varied checkpoints) to captures the characteristics of multiple domains and store them as domain prototypes. These prototypes serve as anchors to adjust prompt representations, enabling generalization to unseen models. The optimized prompt representations are subsequently used to generate conditional representations for controllable image generation.
Strengths: - The paper presents a new perspective to the automatic prompt optimization at the feature level.
- Authors have shown the effect of each loss component in Table 2.
Weaknesses: - I believe the quality of the writing can be improved. The language of the paper is unnecessarily complex and hard to comprehend in certain places, especially the methodology section.
- The problem solved in the paper seems more of an incremental learning problem than domain generalization. The training and target domains differ only in the number of images seen; they do not differ in style or content according to the paper. What are the authors' views on this?
- I can see in Figure 2 that the UNet of the diffusion model is frozen. If that is the case, it renders the diffusion loss $\mathcal{L}_d$ in Equation 11 useless. What exactly is being updated in Equation 11? The $\theta$ parameters of the UNet are frozen.
- In Fig. 3 (row 3), the proposed method does not seem to produce optimal results with DreamShaper. The cat is supposed to be standing as per the prompt; however, the cat generated by the proposed method appears to be sitting. In this case, the baselines seem to work better than the proposed method.
- Fig. 2 seems to be hastily drawn. Many things could be improved in the figure (minor: the "M" of the LLM block extends outside the box). Also, the font of the in-context learning block is very small.
- The gains in Table 1 seem average compared to the baselines; in fact, the baselines are better in certain cases. Given the complexity of the proposed method, larger gains would be expected, which is not the case.
Overall, the authors are encouraged to answer the above queries and provide solid explanations
Technical Quality: 2
Clarity: 2
Questions for Authors: Please check the weaknesses section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Authors have provided the limitations statement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. We will address the concerns below.
Q1. The quality of writing can be improved.
A1. Thank you for your suggestions. We will optimize the writing in the methodology section to make it more concise and easy to understand.
Q2. What’s the difference between the training and target domains?
A2. In our dataset, each checkpoint is fine-tuned using images of different styles and content, resulting in varying generative capabilities. During experiments, we use a subset of these checkpoints as the training set and test on another subset. The fundamental difference between source and target domain checkpoints lies in the style and content of the fine-tuning data, not the amount of data.
Q3. What exactly is being updated in equation 11?
A3. In Equation 11, the parameters of $ q_i^k $ will be updated. $ q_i^k $ is the output of $ g_{\text{ada}} $ in Equation 9. Thus, the variables being updated are the parameters of the transformer layers and linear layers in the adapter.
Q4. Compared to proposed method, the baselines seems to be working better in Figure 2.
A4. We found that the issue lies in the ICL-based prompt rewriting in the first stage. As shown in Appendix C.1, the prompts rewritten by the large language model do not include the word "standing." This indicates that while the adapter in the second stage can enhance the generalization of the automatic prompts to new models, it cannot compensate for the information lost by the LLM in the first stage. Therefore, the quality of the automatic prompts is also crucial.
Q5. Fig. 2 need to be improved.
A5. Thank you for your advice. We will improve the quality of the figures and provide clearer figures in the final version.
Q6. Given the complexity of the proposed method, the gains are expected to be larger, which is not the case.
A6. Compared to baseline methods, our approach requires significantly fewer parameters to be trained, limited only to the domain prototypes and adapter. Therefore, our method is not more complex. Additionally, our approach performs better than other methods in most cases, as shown in Table 1 and Figure 2.
---
Rebuttal 2:
Comment: Thank you for your response. Based on the response of the authors, I have few more queries below:
- Q2: The authors mention that images from checkpoints up to checkpoint 40 are taken as source data and from checkpoint 41 onward as target data, and that they differ in content and style. How do the authors ensure the style and content of the target images are different? They are still from the same data distribution, hence more clarity is needed on this point.
- Q4: The authors argue that the LLM's inability to generate the correct pose for the cat is the reason behind the incorrect pose in the final image. This suggests the proposed method is heavily dependent on the quality of the prompt generated by the LLM. How do the authors aim to reduce or remove this dependency? Or is there an alternate way around such issues?
- Q6: I think the authors meant to point to Fig. 3 (Fig. 2 is a block diagram). I kindly request the authors to use correct references. The score improvements are still marginal; I do not see any steep jump in semantic-consistency scores. For example, the proposed method has a color score of 0.477 while SUR-adapter has 0.472, which is a very marginal increment. The same holds in other cases as well.
---
Rebuttal Comment 2.1:
Comment: We appreciate your reply. We will address the concerns below.
Q2. How do authors ensure the style and content of target images are different?
A2. In the dataset we collected, each image is annotated with the source checkpoint that generated it and the manually designed prompt used for its generation. The style and content of the image are determined by these two factors: the source checkpoint and the prompt, respectively.
Regarding style, each checkpoint is uploaded to Civitai.com by users and is typically fine-tuned with users’ private data or data of particular interest. As a result, the fine-tuning data for different checkpoints can be viewed as coming from different data distributions. This variation in fine-tuning data leads to differing generative capabilities across checkpoints, resulting in differences in the styles of the images they generate.
Regarding content, the prompt-image pairs we collected also come from different checkpoints, and these prompts are usually designed by users to highlight the generative capabilities of their respective checkpoints in specific domains. Figure 14 in the Appendix illustrates the differences among prompts, demonstrating that prompts from different checkpoints exhibit significant differences in content. As shown in Figure 14(d), different colors represent prompts from different checkpoints, and we can observe that the same colors tend to cluster together, indicating a clear semantic distinction between the prompts of different checkpoints.
Therefore, in our dataset, the images generated from different checkpoints typically differ in both style and content and cannot be considered as coming from the same data distribution.
Q4. How do the authors aim to improve or remove the dependency on the quality of the automatic prompt? Or is there an alternate way to get around such issues?
A4. Our work primarily focuses on improving the generalization capability of automatic prompts across different checkpoints, with less emphasis on their generation. For generating automatic prompts, we simply use an LLM without fine-tuning. To address the dependency on the quality of the automatic prompts, we can consider it from two perspectives:
1. Use the natural language prompt as an extra input to the adapter. In this way, by introducing specially designed modules into the adapter, it gains additional capabilities for semantic completion of automatic prompts, thereby reducing its reliance on the first stage.
2. Develop a more effective automatic prompt generation module. By jointly training this module with the adapter, we can enhance the module's ability to capture semantics, ensuring that the generated prompts have higher semantic quality while better aligning with the generalization requirements of the adapter.
Q6. In terms of semantic consistency, the improvement of the method in this paper over SUR-adapter is modest.
A6. We apologize for the earlier misreference; the correct figure is Figure 3.
Among the baselines, SUR-adapter mainly focuses on enhancing semantic consistency between prompts and images on a specific diffusion model by using an adapter. Our proposed method also employs an adapter but focuses on improving its generalization capability. Consequently, although our method offers only modest gains in semantic consistency compared to the SUR-adapter, it achieves a substantial improvement in image quality, as demonstrated in Table 1.
Among other baselines, PromptPerfect focuses on improving image quality and achieves an Aesthetic Score of 6.249, which is close to our score of 6.384. However, our method significantly outperforms PromptPerfect in semantic consistency.
In summary, our method achieves strong performance in both semantic consistency and image quality, whereas previous methods did not achieve both.
---
Rebuttal 3:
Comment: Dear Reviewer,
We sincerely appreciate your engagement in the discussion and your recognition of our rebuttal efforts. Thank you very much for your willingness to increase the score.
Since the rating in the original review has not yet been updated, we are unsure whether this is an oversight and would like to kindly confirm if the rating has been changed in the system.
Thank you once again for your time and valuable comments.
---
Rebuttal 4:
Comment: Thank you for the further clarification. I appreciate the authors putting more work into explaining things. I am increasing my score to weak accept. The paper is good; however, the improvements over the baselines are not that great. | Summary: 1. The authors propose model-generalized automatic prompt optimization (MGAPO), which targets the effectiveness of automatic prompts on unseen models.
2. The authors propose AP-Adapter, which includes in-context learning-based prompt rewriting and prototype-based prompt adaptation.
3. The authors build a multi-modal, multi-domain dataset for training and evaluation.
Strengths: 1. The authors introduce the process of collecting and creating the multi-modal multi-domain dataset for training and evaluation.
2. The experimental results show that the proposed method outperforms existing baseline models.
3. The paper includes extensive further analyses and ablation studies to deepen the understanding of the proposed method.
Weaknesses: 1. While the proposed method outperforms other baselines, its performance gain appears quite marginal, particularly considering the significant gap compared to the performance of manual prompts. I am curious about the main reasons for this substantial gap between automatic and manual prompts, and what fundamental limitations of automatic prompting hinder its improvement. Are there any promising directions to overcome these limitations? Can the proposed method be considered as addressing some of these limitations?
2. The proposed prototype-based adaptation utilizes the concept of DomainDrop in a reverse manner. DomainDrop aims to learn domain-invariant features by eliminating domain-sensitive information. Conversely, the proposed method removes domain-insensitive information to construct domain prototypes. However, in the context of domain generalization, domain-insensitive information is often considered highly valuable, as it contains information shared across different domains, which is crucial for generalizing to unseen domains. Therefore, I am curious if the process of removing domain-insensitive information in the proposed method could result in the loss of important information. Additionally, relying on domain prototypes may lead to situations where a model cannot properly handle data from completely new domains that cannot be adequately matched with existing domains. In this case, we need to consider the difficulty of continuously increasing the number of domain prototypes, as this would require an increase in model size as well.
3. While using different checkpoints, the data used for training and testing are consistent in that they are all collected from CIVITAI and fine-tuned based on SD1.5. However, the pre-trained data used for other baseline methods does not necessarily share these characteristics. I am curious if this could have introduced some biases in favor of the proposed method, leading to better results.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the "Weaknesses" section.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: This paper includes a specified section for limitations and the authors adequately addressed them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. We will address the concerns below.
Q1-1. What are the main reasons for this substantial gap between automatic and manual prompts?
A1-1. The effectiveness of manual prompts lies in their iterative nature, allowing humans to refine prompts based on the generated images until the desired quality is achieved. In contrast, automatic prompts aim to replicate this effectiveness by learning patterns from manual prompts.
Q1-2. What fundamental limitations of automatic prompting hinder its improvement?
A1-2. The key limitation is that APO models struggle to evaluate whether an image meets specific requirements and to re-optimize the prompts accordingly.
Q1-3. Are there any promising directions to overcome these limitations?
A1-3. One potential solution is to incorporate reinforcement learning by designing a reward model that simulates human judgment. This would enable the model to dynamically identify and correct deficiencies in the current prompts based on the generated images.
Q1-4. Can we consider the proposed method as addressing some of these limitations?
A1-4. Our approach does not focus on generating superior automatic prompts but rather on enhancing the generalization of automatic prompts across different checkpoints, thereby avoiding the need for repetitive retraining of the prompt generation model for new checkpoints.
Q2-1. Could the process of removing domain-insensitive information in the proposed method result in the loss of important information?
A2-1. As shown in Figure 2, in the adapter section, the final adapted embedding is obtained by concatenating the prompt embedding and the Prototype-anchored embedding, followed by transformation. During this process, domain-insensitive information remains in the prompt embedding and is not completely removed, ensuring that important semantic information is not lost.
Q2-2. Is it necessary to continuously increase the number of domain prototypes to adapt to new data?
A2-2. We do not require the target domain to have perfectly matching domain prototypes. Instead, we assume that the target domains are entirely new and unseen, aiming to anchor their position using existing domain prototypes. To achieve this, we selected checkpoints with diverse styles as our training data to maximize the adaptability of the prompt representations. Consequently, we do not need to continually increase the number of domain prototypes.
Q3. Could the data have introduced some biases in favor of the proposed method, leading to better results?
A3. Our basic assumption is that all checkpoints have the same structure and are fine-tuned from the same base SD model using different data. Our method addresses the generalization of automatic prompts across different checkpoints. If the target model has an entirely different structure from the training model, such as SDXL or DALL-E, the problem itself becomes unsolvable, as we cannot predict the behavior of an entirely unknown model. Generalization is only possible on checkpoints derived from the same model but fine-tuned with different data.
---
Rebuttal Comment 1.1:
Title: Please respond reviewer redB
Comment: Reviewer redB:
Please let us know whether the authors have addressed your concerns.
Thanks.
-AC
---
Rebuttal Comment 1.2:
Comment: Thank you to the authors for their responses. They have addressed some of my concerns. However, I still find the performance improvement of the proposed method over the baselines to be marginal, especially considering the significant difference when compared to manual prompting, particularly in terms of Blipscore and ImageReward. Most of all, I am concerned about the fairness of the comparison with other baselines, as the characteristics of effective prompts can be dependent on the choice of diffusion models and the content and style of the targeted images. In the present setting, the data used for training and testing are consistent in that they are all collected from CIVITAI and fine-tuned based on SD1.5. However, the proposed method is the only method optimized for this specific setting, whereas the other baselines are not. This could give a comparative advantage to the proposed method, making it difficult to assert that the evaluation is fair. For these reasons, I maintain my previous rating.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer,
We sincerely appreciate your valuable comments and the recognition of our rebuttal efforts. We would like to further clarify on your mentioned two issues:
1. **Regarding the good performance of manual prompting**, this can be attributed to three main factors: iterative selection of prompts (a repeated process of prompt adjustment and image generation), iterative selection of generated images (a repeated process of selecting images over multiple generations), and the possible use of auxiliary tools like ControlNet. In contrast, our method and other baselines do not involve such selection processes or the use of auxiliary tools.
2. **Regarding the fairness of the comparison**, this can be explained from three aspects:
- Firstly, both our method and other baselines are trained on our collected data. For example, "Prompt Perfect" is trained with the data from the checkpoint DreamShaper.
- Secondly, although all these data are collected from CIVITAI, they are uploaded by different users and generated using private fine-tuned checkpoints. Our experiments also show that data from different checkpoints exhibit significant domain differences and are not from the same distribution. Therefore, the data used for training and testing are not consistent.
- Thirdly, our goal is to train an adapter on multiple checkpoints to achieve better generalization on unseen checkpoints. However, existing baseline methods cannot accomplish this and fail to produce an adapter with sufficient generalization capability. Therefore, the difference between the two training settings (as shown in Figure 1) also highlights how our method addresses the shortcomings of previous baseline methods, meeting the need to avoid retraining the adapter for new checkpoints. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Recurrent neural networks: vanishing and exploding gradients are not the end of the story | Accept (poster) | Summary: This paper studies the vanishing and exploding gradient phenomenon in RNNs. In particular, it answers the question: "is solving those issues really enough to ensure well-behaved loss landscapes?" This paper reveals the importance of the element-wise recurrence design pattern, combined with careful parametrizations, in mitigating the scale issue of hidden states.
Strengths: 1. It is good to see the complex eigenvalue setting discussed, as previous work seems to be limited to real eigenvalues (such as HiPPO).
2. The teacher-student analysis is well-presented and clarifies the understanding of complex systems within this context.
Weaknesses: 1. The assumption of wide-sense stationary inputs seems strong. Does it cover settings such as the CIFAR-10 dataset?
2. The argument for input normalization seems hand-wavy.
1. When $\lambda \to 1$, the forward and backward passes explode at different speeds: the forward pass is asymptotically $1/(1-\lambda)$ while the backward pass is (at least) asymptotically $1/(1-\lambda)^2$.
2. It would be more proper to say that normalization can relax one scale issue, but it cannot control both scalings at the same time.
3. The paragraph "On the Importance of Adaptive Learning Rates" addresses an essential topic but lacks direct training curves.
1. While the local Hessian analysis provided is valuable, it doesn’t assure improved global performance.
2. It would be more effective and indeed necessary to include training curves comparing results using SGD and adaptive learning rates (such as AdamW). This would clearly illustrate the advantages and need for adaptive optimizers.
4. Related work on parameterizations and orthogonal matrices is limited:
1. https://proceedings.mlr.press/v80/zhang18g.html
2. https://arxiv.org/abs/1909.09501: This work proposes a parameterized approach to train orthogonal matrices.
3. https://arxiv.org/abs/1901.08428
4. https://openreview.net/pdf?id=ryxepo0cFX
5. https://arxiv.org/abs/2311.14495: This work studies the relationship between parameterization and gradients in RNNs/SSMs.
6. https://arxiv.org/pdf/2006.12070
5. I am willing to increase the score if the above issues are addressed.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. On Page 4, Equation 5 focuses on the second-order expectation of $h_t$. It would be helpful to understand the specific reason for choosing a second-order expectation over a first-order one. Generally, implementations do not involve the second moment of hidden states. Why is the first-order expectation not considered in this case? Does the first order suffer from the same problem?
1. A similar forward-pass blow-up has been proved in the LRU paper (Orvieto, 2023), Proposition 3.3. What is the difference in terms of assumptions and techniques between your result and the LRU result?
2. On Page 5, in the paragraph titled "What About Complex Numbers?", you discuss the concept of polar parametrization. This raises an important question: do all parametrizations encounter the same issues as the polar parametrization? If they do, demonstrating this could effectively highlight the limitations of using complex numbers in a polar setting. If not, it may be worthwhile to find the good parameterization. Currently, the exploration of polar parametrization seems quite restricted.
3. Your title is "vanishing and exploding gradients are not the end of the story". A question based on the statement about Page 9, Figure 5, Panel C:
1. Do layer normalization and batch normalization resolve all the problems?
2. If not, I think it is hard to claim that "layer normalization keeps the overall gradient magnitude under control".
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. This paper investigates how the hidden states of recurrent models contribute to learning long-term memory. There is very little research available explaining why the Mamba model, which uses both gating and state-space models, is successful.
1. The discussion on Gated RNNs in lines 205-220 lacks rigorous substantiation, relying primarily on speculative assertions to illustrate the impact of gating mechanisms.
2. The paper does not clearly establish whether the proposed methodology is effective within a gated framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough feedback. We address their concerns below.
**Weaknesses**
1. **Wide-sense stationarity.** A discussion of this assumption was indeed missing. We address it in the global response. In short, this assumption is standard in the analysis of linear time filters (see e.g. the Proakis and Manolakis book) and is mostly for convenience in making calculations easier (see e.g. the classical Wiener–Khinchin theorem). When it is violated, our theory does not perfectly predict signal propagation, but our results hold qualitatively and almost quantitatively (c.f. Fig 1 and 3 of the PDF, which test our predictions in a practically relevant setting). This assumption is not a significant limitation of our theory, as using the empirical autocorrelation function leads to very similar results (see Fig 3 of the PDF, left col).
2. **Input normalization.** The reviewer accurately points out that the forward and backward passes (for the $\lambda$ parameters, but not for the inputs $x$) explode at different speeds and that normalization alone cannot solve it. Normalization is a way to ensure that signals, both hidden states and errors, stay constant over the network's depth. Therefore, its role is to keep the magnitude of the hidden states bounded. However, it does not entirely avoid the explosion of the recurrent parameter gradients, as hinted by the reviewer. This is where reparametrization is required. We hope that this clarifies our argument. If not, can the reviewer point out the source of confusion in this argument?
3. **Adaptive learning rates.** The Hessian analysis indeed only captures the geometry of the loss landscape around optimality. Outside these regions, this only corresponds to part of the Hessian and includes some approximation. However, we would like to point out that this approximation is very standard, e.g., [Sagun et al. 2018](https://arxiv.org/pdf/1706.04454), and we find that the insights we gained by studying the Hessian at optimality hold for the rest of training.
Regarding the training curves of SGD/Adam: This section was indeed missing some discussion of SGD. We found that it is almost impossible to train such recurrent networks with SGD. The Hessian at optimality gives us a hint as to why this happens: the Hessian on the recurrent parameters (e.g., $\lambda$) is much larger than the one on feedforward parameters (e.g., $D$). As a result, to keep SGD stable, we need tiny learning rates, and the feedforward parameters barely change. The training curves for SGD are therefore almost flat. On the contrary, Adam uses adaptive LRs for different parameters, thereby avoiding this problem. We will mention this point and include some training curves in the next version of the paper.
4. **References**. We thank the reviewer for these references.
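The scaling discussed in point 2 above can be checked numerically. The following is a minimal sketch, assuming a scalar linear recurrence $h_t = \lambda h_{t-1} + x_t$ with constant unit input (a toy setting, not the exact model analyzed in the paper): as $\lambda \to 1$, the steady-state hidden state grows like $1/(1-\lambda)$ while its gradient with respect to $\lambda$ grows like $1/(1-\lambda)^2$.

```python
# Toy scalar recurrence h_t = lam * h_{t-1} + x_t with constant input x_t = 1.
# Forward pass converges to 1/(1-lam); the gradient dh_t/dlam converges to
# 1/(1-lam)^2, so the backward pass blows up faster as lam -> 1.
def steady_state_and_grad(lam, steps=100_000):
    h, dh = 0.0, 0.0
    for _ in range(steps):
        dh = h + lam * dh  # chain rule: d/dlam (lam * h + 1) = h + lam * dh/dlam
        h = lam * h + 1.0
    return h, dh

for lam in (0.9, 0.99, 0.999):
    h, dh = steady_state_and_grad(lam)
    print(f"lam={lam}: h ~ {h:.1f} (1/(1-lam) = {1/(1-lam):.1f}); "
          f"dh/dlam ~ {dh:.1f} (1/(1-lam)^2 = {1/(1-lam)**2:.1f})")
```

The gap between the two rates illustrates why keeping the hidden states bounded (the forward pass) does not by itself tame the gradient with respect to the recurrent parameter, which is where reparametrization is required.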
**Questions**
1. **Why second-moment?** We focus on the second moment for multiple reasons. First, the first moment would be 0 as long as the input has a mean of 0, which is the traditional regime in deep learning. Second, this enables us to partially capture the norm of the different quantities at hand. Third, this quantity appears, up to some kind of data-dependent change of norm, in the Hessian, see e.g. Eq. 11. Finally, we would like to note that studying second-order quantities is standard in deep learning, a classical example is the seminal Glorot & Bengio initialization paper (2010).
Regarding the comparison with Prop 3.3 of the LRU: our Eq. 4 is indeed very close to their result. Here, we generalize it to more general input distributions.
2. **On complex numbers.** In our paper, we discuss both polar and real-imaginary parametrizations, which are arguably the most usual ones, and show that they suffer from explosion behaviors. We tried our best to find an "optimal" parametrization for the polar case, given that we know how to control the magnitude of the gradient w.r.t. $|\lambda|$. However, we found that the most natural choices all have some issues (cf. A.3.2). It does not mean that it is impossible to find a better parametrization, just that we could not. We are happy to consider any suggestions that the reviewer may have.
3. **On normalization.** In Fig 5, we show that overall gradient magnitudes remain bounded with layer normalization, hence “the overall gradient magnitude [is] under control”. However, this is not a proper solution as different hidden neurons share the same normalizers. It follows that a neuron with a high $\lambda$ will heavily influence the normalization factor and thus neurons with smaller $\lambda$ values will be ignored. We would also like to highlight that the LRU paper already includes a similar argument.
**Limitations**
First, we would like to point out how rich the learning dynamics of the simple time-invariant recurrent network are and that analyzing it in depth is not a trivial task. We are convinced that this analysis alone is already a strong contribution to the field. That said, the reviewer’s remark prompted us to analyze more carefully the gated case on our task of Sec 5. We study a GRU as it is the simplest gated architecture. We report those results in the global answer. Overall, we find that diagonality almost holds under standard initialization, for the reasons we mention in lines 205-220, and that our theory captures signal propagation in this regime. However, it is likely that, during training, the architecture becomes less linear and that our theory cannot capture signal propagation anymore. Such architectures have been studied using the mean field framework by Chen et al. 2018 [50]. Their result captures non-linearity but is arguably harder to interpret. We believe that our work provides a complementary and more intuitive understanding of signal propagation while being less general.
We hope that our rebuttal convinces the reviewer of the relevance and soundness of our work and that they may consider reevaluating our paper.
---
Rebuttal 2:
Comment: I have carefully reviewed both the reviews and the rebuttals.
However, I believe that the section on the wide-sense stationary assumption may not be essential for deriving the key results. Estimating hidden states could potentially be approached deterministically with bounded inputs, which might more effectively highlight the advantages of specific parameterizations, initializations, or normalizations.
While the inclusion of the stationarity property facilitates a discussion of the second-order moment, this approach seems somewhat forced. The bound on the second-order moment in a random-process context does not provide significantly more valuable insight. Although generalizing to more diverse input distributions is a noteworthy contribution, it is crucial to clearly define the boundaries of this generalization. Without specifying the limits of the input expansion, the contribution may not be fully strengthened. It remains unclear how this paper's contribution stands apart from the LRU paper and what kinds of stronger understanding can be derived for complex models such as gated ones.
Therefore, I will increase my score from 4 to 5 and encourage the authors to address the issues related to the stationary assumption and to clarify the improvements over the LRU paper in the next version.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for updating their score.
The bounded input assumption the reviewer is suggesting is an interesting one that we did not consider. Such an approach gives an upper bound on the quantities we study that would correspond to the case $\rho=1$ in our analysis (up to a multiplicative constant of course). Our approach has two main benefits compared to this one:
- It also provides a lower bound on the second moments considered, given that it is an equality
- Our upper bound is more precise when the autocorrelation function decreases quickly (i.e., $\rho < 1$, cf. Fig 1.A), which is the more realistic case (see Figure 1 in our rebuttal PDF for an example)
To get those results, we have to assume stationarity (which is arguably stronger than boundedness), which indeed does not hold in practice. However, given that we empirically found that our results almost perfectly hold when stationarity is not met, we believe that the benefits of this assumption largely outweigh the fact that it will never be exactly achieved in practice. We will include this discussion to the appendix in the next version of the paper. We hope that this argument convinces the reviewer of the interest of the stationarity assumption. | Summary: In this work, the authors present an analysis of Recurrent Neural Networks (RNNs), focusing on problems which impede optimization as the length of input sequences and model memory increase. The authors first focus on the well-studied problem of vanishing/exploding gradients, arguing that this problem by itself does not explain the difficulties in optimization. The authors turn to the less studied "curse of memory", and argue that this issue persists for RNNs even when gradients are controlled to not explode or vanish. Finally, the authors study how some of the alternative recurrent architectures such as LSTMs and SSMs mitigate the curse of memory, and provide some suggestions for how optimization can be improved.
Strengths: I am very intrigued by the approach taken in this work.
The analysis seems novel and the arguments provided are given in a straightforward and convincing manner. The theoretical and empirical results in the main body of the work all seem sound, and are augmented by substantial additional experiments in the appendices.
The authors' analysis results in clear explanations both for how current architectures mitigate the issues studied in this work, and the analysis suggests some directions that may lead to new architectural innovations in the future.
Despite many technical issues in the writing, the argument is presented in a clear way and the flow of the paper makes sense. See below.
Weaknesses: I have three major issues with the paper in its current form. If issue 1 is addressed, I am willing to increase the score of the presentation from fair to excellent, which would bring the paper to a borderline reject. Issues 2 and 3 are more relevant to the soundness of the work, and I've lowered the score to a 2. If the authors address these, I will consider raising the score; however, I can't go above a 3 without a more substantial discussion of limitations, which I think might not be feasible within the rebuttal period; however, I am still open to adjusting this score depending on the discussion.
1. One of the more glaring issues with this work is with the writing. The authors need to take some time to carefully check for typos, incomplete sentences, and correct grammar. I have listed a few particular issues here:
* line 30: remains -> remain
* line 69: You say here that you are showing the gradients can explode, but in the next section say you are controlling the gradients so that they do not explode (lines 83-86). Am I missing something conceptually here that makes this not an inconsistency?
* line 70: remains -> remain
* Figure 1: the last sentence of the caption is unclear and seems to be missing some words.
* line 86: should be "studying this behavior quantitatively", not the other way around
* line 192: alleviates -> alleviate
* line 283: characteristic -> are characteristic
If these issues are all fixed and the authors take some time to carefully edit, I will update my score to reflect this effort. Overall, the typos only affect clarity in a few spots.
2. In section 2.2 - you justify assumptions A) and B), but I couldn't find any justification of assumption C). Unless I am missing something, this needs to be provided, as assuming Wide-sense stationarity is not necessarily standard. At the very least, this assumption could be acknowledged as a limitation and room for future work.
3. There is no real discussion section at all, and there is particularly no discussion of limitations.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above - why is the assumption of Wide sense stationarity needed, and how is your analysis affected by not adopting the assumption?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitations are presented or discussed in the paper as far as I could find.
I see no ethical issues with this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive feedback on our paper and address their concerns below.
1. **Typos**. We thank the reviewer for their detailed report of our typos. It will help us make the manuscript as clear as possible and without typos. Even though the system does not allow for the upload of a revised version of the paper at this stage, we already implemented all the corrections in our latex file and made several additional passes to make sure everything is in the best shape. That said, we are happy to provide clarifications for the typos that affect the understanding of our argument.
Regarding line 69: we mean that the gradients’ magnitude can explode when the memory increases (i.e., $\lambda \rightarrow 1$), even when we are outside the traditional exploding gradient regime (i.e., we enforce $\lambda \leq 1$). It should be noted that in the first case the gradients explode as the memory increases, whereas in the second case, they explode as the interval considered increases.
Regarding Figure 1: “as” is missing (”The loss becomes sharper as information is kept…”).
2. **Wide-sense stationarity assumption.** A discussion of this assumption was indeed missing in the paper. We address it in the global response. In short, this assumption is standard in the analysis of linear time filters (see e.g. the Proakis and Manolakis book) and is mostly for convenience in making calculations easier (see e.g. the classical Wiener–Khinchin theorem). When it is violated, our theory does not perfectly predict signal propagation, but our results hold qualitatively and almost quantitatively (c.f. Figures 1 and 3 in the rebuttal PDF, which test our theoretical predictions in a practically relevant setting). Overall, this assumption is not a significant limitation of our theory, and using the empirical autocorrelation function leads to very similar results (see Figure 3 of the rebuttal PDF, left column).
3. **Discussion of the limitations.** As we discussed in the global response, the limitations of our work are mostly due to the simplicity of our model and the nature of the approach we are following.
Regarding the model we study: While our results only apply to the non-gated case, we show that they still capture the gated case relatively accurately, despite the dynamics being significantly nonlinear (c.f. Figure 3 in the rebuttal PDF). However, there is little hope that our analysis extends to nonlinear vanilla RNNs.
Regarding what we can achieve with such an approach: Studying signal propagation has its limits regarding the insights it can bring. We can mainly study the loss landscape through such an approach, and cannot say anything about, e.g., the memory capabilities of a given architecture.
The reviewer's thoughtful comments have helped us clarify key aspects of our work. We are confident that these explanations address the concerns raised and provide a more comprehensive view of the contributions and limitations of our work. We hope that these clarifications reassure the reviewer to revise their score to better reflect their positive opinion of our paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my comments. After reviewing the authors' response and the remaining concerns of other reviewers, I believe this work would benefit from substantial revision and resubmission at a later date. I have changed my score to a borderline reject, per my initial review, as the authors have adequately acknowledged the particular typos mentioned here.
The discussion of limitations provided in the rebuttal is interesting, and I agree with the feedback of other reviewers that this work is valuable; however, I think a more substantial revision, which works in a discussion of limitations and highlights the more particular contributions of this work, would make for a stronger submission in the future.
---
Reply to Comment 1.1.1:
Comment: We appreciate that the reviewer took the time to read our rebuttal and thank them for updating their score.
However, we have to admit we are not quite sure why the reviewer still believes the paper requires substantial revision. From their review, we had understood that clarifying our contributions (as we did in the rebuttal) at the end of the introduction, adding a discussion of the wide-sense stationarity in Section 2 and the discussion of the limitations of our work in a new paragraph of the conclusion would sufficiently address the concerns raised by the reviewer. Can the reviewer help us understand their viewpoint by clarifying what is missing in our rebuttal? We appreciate further feedback so that we can continue improving our paper. | Summary: The paper explores challenges RNNs face in learning long-term dependencies. While generally attributed to the exploding and vanishing gradients problem (EVGP), the authors reveal that as a network's memory increases, parameter changes cause an explosion of the second moment of hidden states and their gradients, making gradient-based learning highly ill-posed. They study different design choices in recurrent architectures such as element-wise recurrence, gating mechanisms and careful parameterization with respect to the curse of memory. They apply their theoretical findings to successful architectures such as state-space models (SSMs) and LSTMs, offering new insights into why some RNN architectures perform better in gradient-based learning.
Strengths: - The paper addresses a fundamental problem, the curse of memory, and connects it through theoretical analysis and experimental validation to the success of recent architectures, which makes the insights incredibly valuable to the machine learning community
- I like how the authors connect the dots between multiple facets of model training, e.g. investigating the implications of RNN recurrence parametrization on adaptive LR optimizers, which are actually used in practice
- The paper is also well structured and generally well written.
Weaknesses: - To add to EVGP mitigation techniques (l. 61 - 64): There also exist regularization techniques based on dynamical systems theory, see especially [1]
- There are quite some typos across the document, see the list below. I recommend running the paper through some sort of language checker to improve the manuscript from a readability and presentation side.
- The authors repeatedly call $\mathbb{E}[X^2]$ a variance (where $X$ here is a placeholder for any RV, i.e. $h_t$ and $d_\theta h_t$ in the paper), which is technically not correct. It is the second moment. I’d consider fixing this or explaining why the missing term for the variance, $\mathbb{E}[X]^2$ (first moment squared), can be neglected.
Typos / text-bugs:
- l. 119: “[...] a low a pass filtered [...]”
- l. 191: “[...] does not hurts [...]”
- l. 192 “Several RNN architectures implicitly alleviates [...]”
- l. 200 “While such models can can approximate any smooth mappings [...]”
- l. 219 “anaylsis”
- l. 257 “Both network are trained [...]”
- word missing in l. 283 “[...] that characteristic of the curse of memory, [...]”?
- l. 298 consistently -> consistent
[1] Schmidt et al., Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies (ICML 2021)
Technical Quality: 3
Clarity: 3
Questions for Authors: - in l. 180 you assume $\gamma$ is independent of $\lambda$ yet in the Diff. eq. in l. 182 you introduce the dependency again ($\gamma(\lambda)$)? Is this connected to what you write in the caption of Fig. 2 where you decouple $\gamma$ from $\lambda$ when differentiating? Can you explain the reason for this in more detail?
- in l. 289 you mention that you probe ADAM’s effective learning rate by providing a vector of ones to the optimizer - does that mean you simply read out ADAM’s (corrected) second-moment estimates and report $\frac{\eta}{\sqrt{v_n} + \epsilon}$, where $\eta$ is the global learning rate and $v_n$ the second moment estimates at iteration $n$? If so, at which iteration/epoch do you query the effective learning rate?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: While an explicit section on limitations is missing, the authors mention their limitations and specific assumptions throughout the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s very positive feedback and their valuable input regarding these additional references and important details. We address their questions below.
> in l. 180 you assume $\gamma$ is independent of $\lambda$ yet in the Diff. eq. in l. 182 you introduce the dependency again ($\gamma(\lambda)$)? Is this connected to what you write in the caption of Fig. 2 where you decouple $\gamma$ from $\lambda$ when differentiating? Can you explain the reason for this in more detail?
Indeed, $\gamma$ must be a function of $\lambda$ for the normalization to be effective. However, including $\lambda$ inside the normalization significantly complicates our analytical formulas, making them harder to reason about in the backward pass. This is why, for that section, we reason about $\gamma(\mathrm{stopgradient}(\lambda))$.
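To make the $\gamma(\mathrm{stopgradient}(\lambda))$ trick concrete, here is a minimal numerical sketch (our illustration, not the authors' code), assuming the simple normalization $\gamma(\lambda) = \sqrt{1-\lambda^2}$ and constant unit inputs; stopping the gradient amounts to dropping the $\mathrm{d}\gamma/\mathrm{d}\lambda$ term from the backward recursion:

```python
import numpy as np

def grad_wrt_lam(lam, T=50, stop_gradient_in_gamma=True):
    """d h_T / d lam for h_t = lam * h_{t-1} + gamma(lam) * x_t with x_t = 1
    and gamma(lam) = sqrt(1 - lam**2). With stop_gradient_in_gamma=True,
    gamma is treated as a constant in the backward pass, mimicking
    gamma(stopgradient(lam)). Illustrative assumption, not the paper's model."""
    gamma = np.sqrt(1.0 - lam ** 2)
    dgamma = 0.0 if stop_gradient_in_gamma else -lam / gamma
    h, dh = 0.0, 0.0
    for _ in range(T):
        # product rule on lam * h + gamma * x_t, using the previous h
        dh = h + lam * dh + dgamma * 1.0
        h = lam * h + gamma * 1.0
    return dh

print(grad_wrt_lam(0.9), grad_wrt_lam(0.9, stop_gradient_in_gamma=False))
```

The stopped version removes the extra $\mathrm{d}\gamma/\mathrm{d}\lambda$ contribution from the gradient, which is exactly what keeps the analytical formulas tractable.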
> does that mean you simply read out ADAM’s (corrected) second-moment estimates?
The quantity we report incorporates both the first-moment and second-moment estimates from Adam, not just the second-moment one.
> at which iteration/epoch do you query the effective learning rate?
In the plots presented, we query the effective learning rate at the very end of training, using a constant learning rate (in contrast to the cosine scheduler used in our other experiments). However, our experimental pipeline monitors the effective learning rate at every epoch, and we have observed that the trends remain consistent throughout training.
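For readers unfamiliar with the probe: below is a minimal reconstruction (our sketch, not the authors' pipeline) of an effective learning rate that combines both of Adam's bias-corrected moment estimates, as described above.

```python
import numpy as np

def adam_step(grads, eta=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Accumulate Adam's moment estimates over a gradient history and
    return the last parameter step, eta * m_hat / (sqrt(v_hat) + eps).
    Its magnitude can be read as a per-parameter effective learning rate.
    Illustrative reconstruction, not the authors' exact probe."""
    m = np.zeros_like(grads[0])
    v = np.zeros_like(grads[0])
    for t, g in enumerate(grads, start=1):
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    return eta * m_hat / (np.sqrt(v_hat) + eps)

print(adam_step([np.ones(4)] * 200))
```

For a constant gradient history, the two bias-corrected moments coincide in magnitude and the effective step size approaches the global learning rate `eta`.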
We recommend the reviewer to take a look at the new experiments we report in the global answer. They show that our theory extends to the gated case and to a larger class of practically relevant architectures. We hope the reviewer may take these additional results into account and consider adjusting their assessment accordingly.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: I thank the authors for the clarifications, elaborate rebuttal and additional experiments in the general response. I think this work is of great value for the community. Hence, I increased my score to 7.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their positive feedback on our work and for increasing their score! | Summary: The paper discusses "the curse of memory" which is the hypersensitivity of hidden states to parameters as the memory increases. This can lead to optimization issues, even if the exploding/vanishing gradient issue is addressed. The authors discuss solutions to this problem: complex diagonalization, normalization, and reparametrization. To this end, the authors derive closed-form solutions for the variance of the hidden state and its sensitivity to the parameters and discuss how the solutions can prevent the variances from diverging. This can give insights into why certain families of RNNs are performant. In particular, they show that both the variance of the hidden state and its derivative wrt to the parameters diverges as the memory goes to infinity (although at different rates). Therefore, normalizing the hidden state and reparametrizing the variable representing the memory's temporal scale can avoid the divergence issue.
Strengths: - The paper puts forward closed-form solutions for the variance of hidden state and the sensitivity of hidden state to parameters. This is interesting and can provide insights into RNN dynamics and possibly network design.
- The results and the discussion on adaptive learning rate and how it relates to the Hessian of fully connected and complex diagonal RNNs are interesting and, to my knowledge, novel.
Weaknesses: - The main weakness of the paper is the presentation. It is unclear what the contribution of the paper is. At many points in the manuscript, it is unclear if the authors are discussing previous results on LRUs or putting forward new results. It would be good if the authors explicitly spelled out their contributions.
- Although the paper is presented as discussing the curse of memory and the solutions to it, it reads more like "why LRUs are performant". The solutions discussed (exponential reparametrization, normalization, and diagonalization) are all introduced and employed in the LRU architecture [20]. However, the authors only discuss SSMs and gated RNNs as networks that implicitly address "the curse of memory". Meanwhile, LRUs employ the same solutions discussed in the paper explicitly. Although LRUs can be thought of as a subclass of SSMs, vanilla SSMs [17] do not use the normalization, diagonalization, and exponential reparametrization discussed in the paper. Therefore, discussing LRUs only in experiments "to represent SSMs due to its simplicity" is not justified.
- If the solution to the curse of memory is not the contribution of the paper, then the novelty and contribution become limited. As I mentioned earlier, LRUs employ the same solution and sometimes with the same justification, for example, exponential parametrization is used for higher granularity around 1 [20].
- To motivate their results, the authors mention that "We answer negatively by showing that gradients can explode as the memory of the network increases, even when the dynamics of the network remains stable." but it is unclear where this is shown.
- minor comments:
L117: the paragraph on backward pass is confusing and hard to understand.
L139: check wording. Could not understand it.
Technical Quality: 3
Clarity: 2
Questions for Authors: - What are the main contributions of the paper? Which results and which sections?
- What are other types of normalization and reparametrization, other than the ones employed by LRUs, that can address the curse of memory?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and address their concerns below.
**On the contributions of the paper**
> What are the main contributions of the paper?
We provide a detailed list of our contributions in the global answer.
> If the solution to the curse of memory is not the contribution of the paper, then the novelty and contribution become limited.
In this paper, our objective is not to design a new architecture capable of avoiding the curse of memory — but instead to describe this issue thoroughly (it has never been reported before), and argue why recent architectures (e.g. SSMs) and classical ones (LSTMs, GRUs) have inner mechanisms alleviating the curse of memory. As we describe in the paper, the issue we study is deeply linked with the optimization of any recurrent model capturing long-range interactions, and translates into challenging loss landscape properties that we show can be alleviated by diagonality and reparametrization. Indeed, as the LRU paper (and others) shows, linear RNNs that are functionally equivalent (i.e. real dense linear RNNs, complex diagonal linear RNNs, reparametrized linear RNNs) have drastically different performances on tasks such as the ones within the long-range arena benchmark. Our paper dives deep into this intriguing finding and finds a direct effect of some of the commonly used tricks on the loss landscape.
**On the different architectures**
> What are other types of normalization and reparametrization, other than the ones employed by LRUs, that can address the curse of memory?
We decided to focus on the LRU due to its simplicity, but it is far from being the only architecture capable of addressing the curse of memory. As we argue in Section 3, the discretization of a continuous-time ODE used in SSMs (more detail on that in the next paragraph) and the input and forget gates of LSTMs / GRUs partly serve the same purpose.
> Meanwhile, LRUs employ the same solution discussed in the paper explicitly.
> vanilla SSMs [17] do not use the normalization, diagonalization, and exponential reparametrization discussed in the paper.
> Therefore, discussing LRUs only in experiments "to represent SSMs due to its simplicity" is not justified.
While the LRU paper discusses many of the techniques we study in our paper, their investigation is mainly empirical. The objective of our paper is instead to deepen our theoretical understanding of RNN architectures like LRUs, SSMs, as well as the classical LSTMs/GRUs. Our path is of course guided by the design of such architectures, but our findings do not simply revisit the respective papers: we go one step further and show how these models can partially solve the curse of memory issue through reparametrization, normalizations, and diagonal structure. The LRU is arguably the simplest model capable of alleviating the curse of memory. While we agree that the structure of SSMs such as S4 is quite different, the LRU authors themselves already point out (see the last section of their paper) that many of the tricks employed in their design can be traced back to mechanisms used in S4: it uses a diagonal parametrization of the recurrence, delta serves as normalization, and ZOH discretization induces an exponential stable parametrization.
**On the writing**
> […] it is unclear where this is shown.
This sentence refers both to our analytical and empirical results. In our analytical results, we show that $\mathrm{d}_\theta h_t$, and therefore the loss gradient, grows to infinity as $\lambda$ goes to $1$ (“the memory increases”). Figure 5.B shows that the gradient of the loss explodes in practice (there is a typo in the legend of this figure: the quantity reported here is the gradient of the loss). In all cases, the dynamics of the network are stable as $\lambda \in [0, 1)$.
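As an illustrative numerical sketch of this phenomenon (ours, not the paper's experiment): for the scalar recurrence $h_t = \lambda h_{t-1} + x_t$ with constant unit input, the sensitivity $\mathrm{d} h_t / \mathrm{d}\lambda$ converges to $1/(1-\lambda)^2$, so it diverges as $\lambda \to 1$ even though the dynamics remain stable for $\lambda \in [0, 1)$.

```python
def hidden_state_sensitivity(lam, T=10_000):
    """d h_T / d lam for h_t = lam * h_{t-1} + 1, computed via the exact
    forward recursion d h_t = h_{t-1} + lam * d h_{t-1}."""
    h, dh = 0.0, 0.0
    for _ in range(T):
        dh = h + lam * dh
        h = lam * h + 1.0
    return dh

# grows like 1 / (1 - lam)**2 as lam approaches 1
print([hidden_state_sensitivity(l) for l in (0.9, 0.99, 0.999)])
```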
We are confident that our remarks address the reviewer's concerns and provide necessary clarification, and we remain available during the discussion period for any additional questions. We encourage the reviewer to examine the new results reported in our global answer, which substantially extend the scope of our work, and we hope that our rebuttal will prompt a favorable reassessment of our paper's contributions and impact.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for taking the time to respond to my comments and other reviewers comments. I will comment on the issues raised by myself here.
I appreciate the importance of the topic and the fact that the authors study the RNNs and LRUs from a more theoretical standpoint. My general concern is presentation and I believe that the paper will greatly benefit from major reorganization.
While the general solution to the curse of memory is discussed, the specific solutions, i.e., the specific forms of reparametrization and normalization, are the ones employed by LRUs. But the LRUs are not really discussed, or mentioned, in the preceding sections. I would suggest either making the paper “a theoretical study of LRUs” or discussing more specific solutions, e.g., S4, and making it more general. The list of contributions stated in the global rebuttal is a good starting point. I encourage the authors to resubmit the manuscript after reorganization and revision. Therefore, I will keep my rating as “Technically solid paper where reasons to reject outweigh reasons to accept.”
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their time and valuable input. Based on our understanding, the reviewer is suggesting two major revisions: first, make the paper less LRU-centric, and second, use the contribution statement from our rebuttal to reorganize the paper.
Regarding the first point, we agree that this criticism applies to the empirical part of the submitted paper, but we believe that Figure 3 from the PDF already addresses this concern. We additionally believe that focusing on one architecture in most of the paper, while discussing how our findings apply to other architectures (both theoretically and empirically), helps keep the exposition simple.
As for the second point, our rebuttal’s contribution list follows the exact same structure as the paper.
As a consequence, we are uncertain how to structure the paper differently. Could the reviewer provide more specific feedback on what we could improve or clarify what additional experiments/results they would like to see in our paper? | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their high-quality feedback and their positive comments about our paper. We answer the main points below and respond to each reviewer more specifically in corresponding threads.
**Contributions-limitations.** Many reviewers highlighted that the contributions and the limitations of the paper should be made clearer. We list them below:
- **Contributions**
- We identify, **for the first time**, a crucial issue arising in the training of recurrent neural networks. We study it analytically in great detail in a simplified, yet realistic, setting. We then highlight that **successful RNN architectures** all share a common feature: they **mitigate this issue**.
- We confirm that our theory accurately **captures learning dynamics in a controlled teacher-student regime**. We additionally show how our analysis enables understanding parts of the Hessian structure, thereby improving our understanding of the critical differences between parametrizations/architectures and how they **interplay with different optimizers**.
- Overall, our paper brings theoretical insights into the training of recurrent neural networks that are of **great practical relevance**. Those results are particularly important as the interest in RNNs has been rising in recent years and there is **still very little literature** on the topic.
- Reviewers uPdc and xzKN rightly pointed out that we should better position our paper in comparison to the LRU paper. The **key distinction lies in the question we study**: the LRU paper aims to extract **which** components are critical to the great performance of SSMs, whereas we aim to understand **why** (some of) these components are critical. Their paper is therefore mostly empirical while ours is theoretical. That said, our work **naturally builds on the LRU**: First, we found their architecture to be more amenable to theoretical analysis than SSMs. Second, we designed our teacher setup task to reproduce the qualitative findings of the LRU paper on a more sophisticated task (a discussion of this point in Sec. 4 is missing). Third, our Eq. 5 generalizes their Eq. 6 to more general input distributions.
- **Limitations**
- Our theoretical analysis makes three main assumptions that are reasonable, yet have some limitations.
1. The recurrence is **diagonal and time-invariant**. Our analysis **does not exactly capture signal propagation in the gated case**. However, we provide new results (see below + PDF) demonstrating that our theory is a good proxy for gated RNNs at initialization.
2. The sequences are **infinitely long.** In practical terms, our results only **apply whenever the sequence length is larger than the largest characteristic timescale of the network** (in our preliminary investigations we found x3 to be large enough).
3. Input sequences are **wide-sense stationary.** This is a very **standard assumption** in the analysis of linear time-invariant filters. Yet, many real-world processes are not stationary, e.g. the Wikipedia data we are considering in Sec. 5. We provide new results (c.f. next paragraph) that show that our **analysis is still accurate enough** when this assumption is not met.
- The technical tools we use for our theoretical analysis are rather elementary and there is little to no hope that they generalize to highly nonlinear dynamics.
- Signal propagation that is independent of the time scale considered is necessary for efficient gradient-based learning but not sufficient. There are many questions that we thus cannot answer with such an analysis. For example, our theory cannot give any insight into how much memory a model can store, or tell us how learned networks will generalize to longer sequences than the training ones.
**New results.** Following the reviewers’ questions regarding 1) whether our analysis holds when the inputs are not wide-sense stationary and 2) how much it holds for gated RNNs, we provide a refined empirical analysis of signal propagation in the setting of Sec. 5.
1. The Wikipedia dataset is not wide-sense stationary and of finite length so we cannot directly apply the results of our analysis. Instead, we compute the empirical auto-correlation function (PDF Fig. 1). We find that we can approximate it well with one i.i.d. component ($\rho = 0$) and a slowly decaying one ($\rho \approx 1$). This is the autocorrelation function we plug into our analytical formulas when needed.
2. In Sec. 3, we argued that gated RNNs can behave like the diagonal networks we study theoretically. We verify this empirically in Fig. 2 of the PDF by looking at the diagonal and non-diagonal components of the recurrent Jacobian $\mathrm{d}h_t/ \mathrm{d}h_0$ of a GRU. We find that the non-diagonal components are indeed negligible in standard initialization schemes (this breaks when we multiply the hidden to hidden connections by a factor ~3).
3. We study signal propagation in a simplified GRU network (same as in Chen et al. 2018) and compare it with our theory (slightly modified to take into account normalization). We find that our theory almost perfectly captures the time-invariant case and adequately captures the gated case.
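The empirical autocorrelation estimate mentioned in point 1 can be sketched as follows (a generic, normalized estimator applied to a toy AR(1) signal; the data and parameters are our illustrative assumptions, not the paper's Wikipedia experiment):

```python
import numpy as np

def empirical_autocorrelation(x, max_lag):
    """Biased empirical autocorrelation estimate, normalized so lag 0 is 1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(max_lag + 1)])
    return acf / acf[0]

# Toy signal: an AR(1) process whose autocorrelation decays like rho**k,
# mimicking a slowly decaying component with rho close to 1.
rng = np.random.default_rng(0)
rho, n = 0.99, 200_000
ar1 = np.zeros(n)
noise = rng.standard_normal(n)
for t in range(1, n):
    ar1[t] = rho * ar1[t - 1] + noise[t]

acf = empirical_autocorrelation(ar1, 5)  # roughly [1, rho, rho**2, ...]
```

Fitting such an estimate with a fast-decaying component and a slow one (rho near 1) is one simple way to obtain an autocorrelation function that can be plugged into analytical formulas.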
We are confident that our rebuttal addresses the concerns raised by the reviewers and hope that they will consequently update their scores. Given that we cannot update the paper, we have integrated those remarks, as well as fixed typos, in what will be the next version of the paper.
Pdf: /pdf/1690f1c53f42c582ce54ecab04ff4ece90ccd442.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper provides a theoretical and practical analysis on the difficulties of training recurrent neural networks. In particular, given the current rise of interest in leveraging recurrent mechanisms for long sequence processing -- due to novel architectural components and solutions (deep state-space models, linear/diagonal recurrences, exponential parametrizations), identifying the crucial components allowing to preserve long range information is a very important research avenue.
Strengths: The paper provides an interesting theoretical analysis that highlights which are the key components allowing modern solutions (Deep SSMs) and gated models (LSTMs/GRU) to achieve good performances. Moreover, it provides some proof-of-concept practical results that help to confirm the analysis. The paper is well written (apart from few typos, I suggest a proofreading).
Weaknesses: I have some concerns with some of the paper's initial framing and setting (see the Questions section) and with the experimental analysis, which has been carried out on very synthetic tasks that I am not sure are capable of representing real learning scenarios, possibly hindering the validity of the analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) (Minor) While I understand that recent literature has completely overridden this notion, the term "state-space model" refers [1,2,3] to a very general, broad concept which is common in control theory and dynamical systems -- it simply refers to a model equipped with state variables governed by first-order differential equations or difference equations. Such a definition covers the typical feedback connection of recurrent models. Thus, stating in the abstract and in the main text "state-space models, a subclass of RNNs" seems a bit too strong to me -- even if several recent works have highlighted the connections. I would be completely fine with specifying "Deep" SSMs (just to highlight that the authors are referring to modern models) or "Structured" SSMs and mentioning the suggested references, to better frame the paper to the readers.
2) In Eq. 1, the authors consider a model not equipped with an output mapping (which was instead considered in Eq. 10, i.e. y_t = Ch_t + Dx_t) -- the hidden state h_t is directly provided to the loss function. Why is that the case, and does the considered analysis still hold when considering such a mapping (which is common in RNNs and SSMs)?
3) The authors refers to an Infinite time horizon/infinite sequences (lines 98/100) as the basis for their analysis. Practically, how long the learning process necessitates to be to the considered issue to emerge/is this something that also the experimental analysis considers? In such case, does this have relation with recent literature highlighting the difficulties of sequential models when dealing with Infinite length sequences [3, 4]? I would like a comment by the authors on this.
4) Line (40): the notation (x_t)_t is not very clear to me.
5) Could the authors give an intuition on lines (113/116)? Why should a higher correlation in the input data imply a higher variance in the model state (in my understanding, similar inputs imply a similar loss function, thus smaller gradients)?
6) In line 135, the authors use yet another form for the state update that does not consider the B linear mapping. Why is that the case?
7) The teacher-student task should be better described and justified. Why did the authors choose to tackle this task instead of other synthetic tasks (such as selective copy, induction heads, etc.; see recent surveys [3] for references on synthetic datasets)? And do the authors believe that the teacher-student task is general enough (i.e., points sampled from a normal distribution) to represent a setting where the state has to learn meaningful long-range dependencies? Do the conclusions still hold when dealing with tasks requiring the storage of more informative data? I believe that the experiments could benefit from more "difficult" settings (selective copy, Long Range Arena) to confirm the theoretical considerations in real learning scenarios.
[1] Genshiro Kitagawa. A self-organizing state-space model. Journal of the American Statistical Association, pages 1203–1215, 1998. 4
[2] Anton Schafer and Hans Georg Zimmermann. Recurrent neural networks are universal approximators. In Proceedings of ICANN, pages 632–640. Springer, 2006.
[3] Tiezzi, Matteo, et al. "State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era." arXiv preprint arXiv:2406.09062 (2024).
[4] Tiezzi, Matteo, et al. "On the resurgence of recurrent models for long sequences: Survey and research opportunities in the transformer era." arXiv preprint arXiv:2402.08132 (2024).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging review. We answer their questions below:
1. **SSM terminology.** We agree with the reviewer that the SSM term has been overloaded. This is the reason why we try to use the RNN terminology as much as possible. That said, we appreciate the suggestion of the reviewer and will change the terminology to “deep SSMs”.
2. **Missing B, C and D?** We focus on the recurrence because the $B$, $C$ and $D$ mappings (from the usual SSM notations) are feedforward layers and signal propagation has been extensively studied in those layers (e.g. Glorot and Bengio 2010).
3. **Infinite sequences.** In our preliminary investigations, we found that having time sequences longer than 3 times the characteristic time constant of the network ($1 / (1 - \lambda)$) is enough to consider the sequences infinitely long. The issue we mention appears from the very first parameter update and is therefore not directly related, to the best of our understanding, to the issues reported in the review (online learning and catastrophic forgetting).
4. Would changing it to $(x_t)$ make it clearer? We want to emphasize that this is a sequence of numbers here.
5. > Could the authors give an intuition on lines (113/116)?
The point we are trying to make in these lines is the following: if we want to train a deep network, we need neural activity to stay of constant magnitude over the network depth (following the classical results on signal propagation in feedforward neural networks). It follows that a recurrent layer that blows up the magnitude of the input will hinder the learning of the rest of the network.
> Why should a higher correlation in the input data imply a higher variance in the model state
We do not have a proper intuition for that result. We would like to emphasize that the correlation here is between time steps, not samples, which might break some intuition. Additionally, the intuition the reviewer has does not always apply: take an L2 loss; if the predictions are all the same (thus all similar) but extremely large, the gradients will also be large.
6. We do not use any $B$ mapping for the same reasons as described in 2.
7. **Justification for the teacher-student experiment.** That is indeed an important point that we didn’t comment on enough in the current version of the paper. We focused on this teacher-student task for 2 main reasons: First, we wanted to have an experiment in which we can easily control important details for signal propagation (e.g. magnitude of eigenvalues or the concentration of eigenvalues), which is almost impossible to do in other synthetic tasks. Second, we wanted to disambiguate signal propagation from other capabilities of architectures like memory. We believe that this setup is general enough for studying signal propagation as we can reproduce the same qualitative order between the architectures as in the LRU paper (on the LRA benchmark) and as our new analysis reveals that, up to a first approximation, inputs can be considered i.i.d. in our Wikipedia experiment (c.f. new results in the global answer).
We believe that our rebuttal answers the questions of the reviewer and hope that the reviewer may update their score accordingly. We remain available during the discussion period to clarify additional points.
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I have read the reviews and rebuttals and I thank the authors for taking the time to answer my questions, clarifying some aspects of the work. I still have some concerns regarding questions 2 and 6 (the assumptions made by the authors in analyzing simplified versions of the models -- do the results still hold when considering models equipped with B, C, D?), which I believe are important and require a more in-depth justification. I agree with other reviewers that the work is promising but requires some additional work to better organize the contribution (i.e., regarding my points: the aforementioned simplification of the B, C, D mappings and the teacher-student experimental setting should be better framed in the context of the paper). Regardless, I will keep my positive score.
---
Reply to Comment 1.1.1:
Comment: We here provide additional details on the B, C and D mappings, in the hope that they will clarify the remaining concerns of the reviewer. These mappings operate token-wise whereas the recurrence mixes information between tokens. This fundamental difference justifies why B, C and D should be considered separately from the recurrence and similarly to the rest of the feedforward layers of the network. On the more technical side, integrating B and C in our theory only requires changing the quantities we give as input to our theory, e.g. we should consider the statistics of Bx instead of x ($C\delta$ instead of $\delta$ for the backward pass through the recurrence). Our theory does not aim to capture the role of $D$ given that it sidesteps the recurrence. To summarize, integrating these mappings does not affect our conclusions (Sections 4 and 5 confirm it empirically).
We believe that the concerns previously raised by the reviewer (as well as the ones mentioned by other reviewers) only require minor modifications to the corresponding sections of the manuscript (see also our answers to reviewers uPdc and Bq7Y). We would appreciate additional input from the reviewer to help us better identify which parts require substantial modifications before the next version of the paper.
DCDepth: Progressive Monocular Depth Estimation in Discrete Cosine Domain | Accept (poster) | Summary: The paper presents a novel framework for the long-standing task of monocular depth estimation. The task is first formulated as a progressive regression in the discrete cosine domain. The authors propose two modules: the PPH module progressively estimates higher-frequency coefficients based on previous predictions, and the PFF module incorporates a DCT-based downsampling technique to mitigate information loss and ensure effective integration of multi-scale features.
Strengths: 1. The paper is well-organized and the idea is easy to understand.
2. The results of the method are presented clearly.
Weaknesses: 1. The authors claim that the global-to-local (coarse-to-fine) depth estimation is a contribution, but this idea is common and adopted by other works [1] [2]. \
[1] Liu C, Yang G, Zuo W, et al. Dpdformer: a coarse-to-fine model for monocular depth estimation[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2024, 20(5): 1-21.\
[2] Li Y, Luo F, Xiao C. Self-supervised coarse-to-fine monocular depth estimation using a lightweight attention module[J]. Computational Visual Media, 2022, 8(4): 631-647.
2. In Section 3.2, some descriptions are confusing. “This grouping strategy ensures that lower-frequency groups contain fewer components necessitating more prediction steps, while higher-frequency groups encompass a larger number of components requiring fewer steps”, but in Fig 2, local areas (higher-frequency) require more steps. It is recommended that the authors clarify this seemingly contradictory description.
3. The authors claim that the DCT-based downsampling technique tends to mitigate information loss, but this module has not been reasonably explained.
4. The experiments are somewhat lacking in terms of including the latest works that achieve state-of-the-art performance. Although the authors compare their results with some previous works, other MDE methods [3, 4, 5] could provide valuable additional comparisons. When evaluated against these newer methods, the proposed method does not demonstrate superior performance. \
[3] Ning J, Li C, Zhang Z, et al. All in tokens: Unifying output space of visual tasks via soft token[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 19900-19910. \
[4] Yang L, Kang B, Huang Z, et al. Depth anything: Unleashing the power of large-scale unlabeled data[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 10371-10381. \
[5] Saxena S, Kar A, Norouzi M, et al. Monocular depth estimation using diffusion models[J]. arXiv preprint arXiv:2302.14816, 2023.
5. “MDE is extensively applied across various fields such as autonomous driving, robotics, and 3D modeling [45, 48, 9, 42]” recommend changing to “autonomous driving[…], robotics[…], and 3D modeling[…]”.
Technical Quality: 3
Clarity: 2
Questions for Authors: See the Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The contributions are not substantial enough. The authors have not clearly articulated their design for DCT-based downsampling. Additionally, the proposed method does not achieve state-of-the-art results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Clarification about Contribution:** Thank you for your comment. We believe there may be a misunderstanding regarding our contributions for the following reasons:
1. As stated in lines 53-54, our main contribution is being the first to formulate monocular depth estimation as a progressive regression task in the discrete cosine domain, achieving state-of-the-art performance. All other reviewers consider our idea a novel insight (Reviewer HHm4), quite interesting (Reviewer fZWS), and significantly different from previous methods (Reviewer tNUP).
2. The global-to-local or coarse-to-fine concept is broad, and researchers can achieve this in various ways. The two approaches you mentioned employ tandem networks to perform pixel-wise coarse depth estimation and final depth refinement in the spatial domain. In contrast, our method leverages the discrete cosine transformation to segregate depth information into various frequency components and progressively predict them from low-frequency to high-frequency components based on an iterative mechanism. Our approach is fundamentally different from the two approaches you mentioned.
3. Besides the novel formulation for monocular depth estimation, we also proposed two innovative modules: the progressive prediction head and the pyramid feature fusion module. We demonstrated their effectiveness through ablation studies.
2. **Clarification about Grouping Strategy:** Thank you for your comment. The lower-frequency groups contain fewer frequency components to be predicted, while the higher-frequency groups contain more frequency components. Each group of components is predicted through one iteration; thus, the average number of prediction steps for higher-frequency components is less than that for lower-frequency components. Due to space limitations, we only report part of the evolution results in Figure 2. Please refer to Figure 1 of the attached PDF for a detailed illustration.
3. **Clarification about DCT-based Downsampling:** Thank you for your comment. We believe you may have missed some of the explanations about the DCT-based downsampling in our main text. In Section 3.1, we review the Discrete Cosine Transform (DCT) and introduce its energy compaction property. In lines 154-162, we elaborate on the workflow of the DCT-based downsampling strategy, explaining that the key information of feature maps is preserved during downsampling by leveraging the energy compaction property of DCT. Additionally, we illustrate the workflow of DCT-based downsampling in the bottom-left corner of Figure 3.
4. **Clarification about Experimental Comparison:** Thank you for your comment. Compared with our method, the approaches you mentioned have significant advantages in experimental configurations. We summarize them as follows:
| Method | Backbone | Pretraining | Training Set Size |
| ------------------ | --------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| AiT [3] | Swin-Large V2 | SimMIM | Same as Ours |
| Depth Anything [4] | ViT-Large | DINO V2 | Over 63.5 Million |
| DepthGen [5] | Efficient U-Net | Image-to-image self-supervised pretraining & supervised image-to-depth pretraining | Supervised pretraining: 8.5 million & NYU-Depth-V2 finetuning: 50000 |
| Our | Swin-Large V1 | ImageNet-22k classification | NYU-Depth-V2: 24231 |
We did not include the comparison with these methods due to the large gap in experimental configurations. To further demonstrate the advancement of our method, we integrated our method into Depth Anything and fine-tuned it on NYU-Depth-V2 following the settings in the Depth Anything paper. We denote the fine-tuned Depth Anything model as *Depth Anything ft.*, and our fine-tuned method as *Our ft. DA*. We also compared our method with DepthGen on both the NYU-Depth-V2 and KITTI Eigen datasets. The results are reported as follows:
| Method | Backbone | Abs Rel | RMSE | $log_{10}$ | $\delta<1.25$ | $\delta<1.25^2$ | $\delta<1.25^3$ |
| :----------------: | :-------------: | :-------: | :-------: | :--------: | :-----------: | :-------------: | :-------------: |
| Depth Anything ft. | ViT-Large | 0.056 | 0.206 | 0.024 | 0.984 | 0.998 | 1.000 |
| DepthGen | Efficient U-Net | 0.074 | 0.314 | 0.032 | 0.946 | 0.987 | 0.996 |
| Our | Swin-Large | 0.085 | 0.304 | 0.037 | 0.940 | 0.992 | 0.998 |
| Our ft. DA | ViT-Large | **0.055** | **0.204** | **0.024** | **0.985** | **0.998** | **1.000** |
| Method | Backbone | Abs Rel | RMSE | $RMSE_{log}$ | $\delta<1.25$ | $\delta<1.25^2$ | $\delta<1.25^3$ |
| :------: | :-------------: | :-------: | :-------: | :----------: | :-----------: | :-------------: | :-------------: |
| DepthGen | Efficient U-Net | 0.064 | 2.985 | 0.100 | 0.953 | 0.991 | 0.998 |
| Our | Swin-Large | **0.051** | **2.044** | **0.076** | **0.977** | **0.997** | **0.999** |
Our fine-tuned method outperforms the Depth Anything counterpart, which demonstrates the superiority of our method. Furthermore, our method significantly outperforms DepthGen on all metrics of KITTI Eigen and several metrics on NYU-Depth-V2, despite DepthGen employing a large amount of data for supervised pretraining.
5. **About References:** Thanks for your advice and we will revise our paper according to your suggestion.
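As context for the DCT-based downsampling discussed in point 3 above, the energy-compaction argument can be illustrated with a minimal NumPy sketch (our own illustrative example, not the paper's implementation; the function names and block sizes are hypothetical): for a smooth block such as a depth patch, keeping only the top-left (low-frequency) corner of its 2-D DCT spectrum and inverting at the smaller size retains most of the signal energy.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: C @ x computes the 1-D DCT of x.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct_downsample(block, keep):
    """Downsample an (n, n) block to (keep, keep) by truncating its 2-D DCT
    spectrum to the lowest-frequency (top-left) coefficients."""
    n = block.shape[0]
    C = dct_matrix(n)
    spectrum = C @ block @ C.T            # 2-D DCT
    low = spectrum[:keep, :keep]          # energy-compacted corner
    Ck = dct_matrix(keep)
    # keep/n rescales between orthonormal bases of different sizes,
    # so the mean intensity level is preserved.
    return (keep / n) * (Ck.T @ low @ Ck)

# A smooth 8x8 block: its DCT energy concentrates in the low frequencies.
x = np.fromfunction(lambda i, j: np.sin(0.3 * i) + np.cos(0.2 * j), (8, 8))
C8 = dct_matrix(8)
spec = C8 @ x @ C8.T
energy_low = np.sum(spec[:4, :4] ** 2) / np.sum(spec ** 2)
small = dct_downsample(x, 4)
print(small.shape, energy_low > 0.9)
```

For such smooth inputs, the fraction of energy in the retained low-frequency quadrant is close to 1, which is the sense in which this downsampling "mitigates information loss" compared to naive strided subsampling.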
---
Rebuttal Comment 1.1:
Comment: First, thanks for your effort in responding to my questions.
Then, I read all the responses, and the numerous experiments and detailed answers have convinced me that this paper has sufficient contributions. However, the authors should discuss limitations and corner cases.
Finally, I'd like to raise my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for improving the rating. Due to limited space, we discuss the limitations of our method in Section E of the supplementary material. We will move this discussion to the main text in the revised version to enhance the completeness of our paper.
Specifically, our model is supervised by comparing the difference between its predictions and the ground truth in the spatial domain. However, the sparsity of ground truth may inefficiently supervise the estimation of frequency coefficients. While we have evaluated our method on the KITTI dataset with sparse ground truth and achieved state-of-the-art performance, further exploration is needed to evaluate its performance on even sparser datasets. Please see Section E for detailed discussion. | Summary: The paper introduces a frequency domain-based method for monocular depth estimation. The proposed method begins with the prediction of low-frequency components to establish a global scene context, followed by successive refinement of local details through the prediction of higher-frequency components. The proposed method is validated on the NYU-Depth-V2, TOFDC and KITTI datasets.
Strengths: To the best of my knowledge, the discrete cosine domain-based progressive design is significantly different from previous methods and gives a promising paradigm in depth estimation.
The proposed method is interpretable, using discrete cosine transformation to segregate the depth information into various frequency components and enriching details in a progressive manner.
The proposed method outperforms prior work across multiple datasets with comparable or fewer parameters.
Detailed ablation studies indicate the effectiveness of each key component.
The paper is clearly structured and well presented.
Weaknesses: As shown in Table 2, the proposed method significantly outperforms NewCRFs, PixelFormer, and IEBins on the TOFDC dataset compared to other datasets. It would be beneficial to provide a detailed discussion regarding this to better understand the strengths of the proposed method.
It is nice to also visualize the evolution of depth predictions in the frequency domain.
In Fig. 3, the three depth predictions at the bottom look the same. They probably need to be replaced with the actual experimental results.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see the weakness section.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: It is nice to try the completed ground-truth depth maps for supervision on the KITTI dataset, using existing depth completion frameworks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Discussion on TOFDC Result:** Thanks for your valuable suggestion. The TOFDC dataset is challenging due to its small number of training images with limited diversity. NewCRFs, PixelFormer, and IEBins independently predict pixel-wise depth without modeling the correlations among them. We believe this approach makes it difficult to learn general depth estimation knowledge from a small-scale dataset, causing these large models to overfit on the training set and resulting in degraded depth estimation performance. In contrast, models with fewer parameters, such as BTS and AdaBins, achieve better depth estimation results.
Unlike the large models mentioned above, our proposed method models local depth correlations by making patch-wise predictions in the frequency domain. This approach allows our model to better exploit general depth estimation knowledge. Additionally, we include two regularizations in the training loss to encourage the model to output smoother depths, helping to avoid overfitting and achieve better performance.
2. **Depth Prediction Evolution Visualization:** Thank you for your professional suggestion. We have included the visualization of frequency prediction evolution in Figure 1 of the attached PDF. We will incorporate this result into our revised paper.
3. **Depth Prediction Visualization in Figure 3:** Thank you for your suggestion. The depth results at the bottom of Figure 3 are produced by our model. However, they appear indistinguishable due to the small resolution. We will enhance the presentation in our revised version to ensure the differences are more visible.
4. **Supervision with Complete Depth Map:** Thank you for your constructive advice. We will explore using the dense depth map generated by the depth completion model as supervision to improve the performance of our method in future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. The proposed method is promising and the authors have addressed my concerns.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for recognizing our work. | Summary: This paper introduces DCDepth, a framework designed to tackle the long-standing monocular depth estimation task. In general, the proposed methods are quite interesting, the motivations for this work are clear, the experiments conducted are comprehensive, and the paper is well-structured.
Strengths: This paper introduces DCDepth, a framework designed to tackle the long-standing monocular depth estimation task. In general, the proposed methods are quite interesting, the motivations for this work are clear, the experiments conducted are comprehensive, and the paper is well-structured.
Weaknesses: 1. The comparisons with SoTA methods in Table 1 and Table 2 are not very clear. The authors should consider focusing on more decimal places and only highlighting the best-performing methods for each metric.
2. The tables should include the publication year and the venue of the compared methods, as this will help readers better understand the context of the comparisons.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Table Result Presentation:** Thank you for your valuable feedback! Based on your suggestions, we plan to update our tables in the following two aspects:
1. Updating the accuracy metrics in Tables 1 and 2 to percentages and multiplying the error metrics in Table 1 by 100, keeping two decimal places for reporting quantitative results.
2. Highlighting only the best-performing methods for each metric.
Below are the updated comparison results of VA-DepthNet, which is closest to our method in performance on the NYU-Depth-V2 and TOFDC datasets:
| Method | Reference | Backbone | Abs Rel | Sq Rel | RMSE | $log_{10}$ | $\delta<1.25$ | $\delta<1.25^2$ | $\delta<1.25^3$ |
| :---------: | :-------: | :--------: | :------: | :------: | :-------: | :--------: | :-----------: | :-------------: | :-------------: |
| VA-DepthNet | ICLR-2023 | Swin-Large | 8.56 | 3.88 | 30.44 | 3.68 | 93.68 | 99.20 | **99.87** |
| Our | --- | Swin-Large | **8.51** | **3.87** | **30.43** | **3.66** | **93.95** | **99.20** | 99.84 |
| Method | Reference | Backbone | Abs Rel | Sq Rel | RMSE | $RMSE_{log}$ | $\delta<1.25$ | $\delta<1.25^2$ | $\delta<1.25^3$ |
| :---------: | :-------: | :--------: | :-------: | :-------: | :-------: | :----------: | :-----------: | :-------------: | :-------------: |
| VA-DepthNet | ICLR-2023 | Swin-Large | 0.234 | 0.029 | 0.619 | 0.373 | **99.550** | 99.890 | 99.969 |
| Our | --- | Swin-Large | **0.188** | **0.027** | **0.565** | **0.352** | 99.490 | **99.891** | **99.970** |
In the revised version, we will update our tables according to these principles.
2. **Including Publication Year and Venue Information for Comparison Methods:** Thank you for your valuable advice! We will include the publication year and venue information of the compared methods in our tables to help readers better understand the context of the comparisons.
---
Rebuttal 2:
Comment: Dear Reviewer fZWS,
Thanks for your comments to our paper. As the deadline for the discussion period is approaching, we wanted to kindly remind you that your feedback is very important to us. We have submitted our responses to your comments and would greatly appreciate any additional questions or feedback you may have.
Thank you for your time and consideration.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Comment: First, I thank the authors for the detailed explanation. I will keep my original rating.
---
Reply to Comment 2.1.1:
Comment: Thank you for your feedback and support! | Summary: The author has proposed the DC depth, which aims to predict a depth map from a monocular image.
The authors introduce a novel technique that implements depth estimation of the frequency coefficients from the discrete cosine domain and enables modeling the local depth correlations.
The author conducted experiments on the NYU-Depth-V2, ToF-DC, and KITTI datasets.
Strengths: 1. Predicting depth from the frequency domain is a relatively novel insight.
2. The presentation is clear. Figure 1 clearly illustrates the progressive estimation scheme.
Weaknesses: Lack of comparison:
Why not compare with methods like Depth Anything [Yang et al., CVPR 2024], Metric3D [Yin et al., ICCV 2023], etc.? The methods currently compared are outdated. Therefore, it cannot be said that a new state-of-the-art performance has been achieved (L14).
Technical Quality: 3
Clarity: 3
Questions for Authors: Please consider answering the questions in weaknesses section during the rebuttal.
Besides, on L27-28, the author states that the current depth estimation is based on a per-pixel basis. I am not sure if this statement is reasonable, as recent MDE methods, such as Depth Anything, they are based on transformers, and there are some operations of patch tokenization, perhaps the per-pixel basis is not a reasonable way of writing.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Clarification of Experimental Comparison:** Thank you for your comment. We would like to clarify the following points:
1. Our research goal is different from that of Depth Anything and Metric3D. We focus on the novel network and algorithm design for monocular depth estimation task, while Depth Anything and Metric3D focus on improving the generalization of the depth estimation model. This difference leads to different experimental setup for these two kinds of methods, which we summarize as follows:
| Method | Pretraining | Training Set Size | Ratio to Our Training Set |
| :------------: | :----------: | :-----------------------------------------------------: | :-----------------------: |
| Depth Anything | DINO-V2 | Over 63.5 Million | 2621 / 6350 / 2742 |
| Metric 3D | ImageNet-22K | Over 8 Million | 330 / 800 / 345 |
| Our | ImageNet-22K | NYU-Depth-V2: 24231 / TOFDC: 10000 / KITTI Eigen: 23158 | 1 / 1 / 1 |
Due to the significant differences in the experimental setup, it is unfair to directly compare our method with these two methods, thus we do not include these comparison results in our paper.
2. We disagree with your comment that “the methods currently compared are outdated,” as we have included recent approaches such as IEBins [1], BinsFormer [2], and VA-DepthNet [3] in our comparison, as shown in Tables 1, 2, and 3 of the main text and Table 7 of the supplementary material.
3. Under the same experimental settings, our method surpasses existing methods on three datasets, achieving state-of-the-art performance. To further demonstrate the effectiveness of our method, we additionally designed the following comparative experiments:
- We use the weights of Depth Anything as initialization, transplant the proposed progressive prediction head into the network of Depth Anything, and fine-tune it on NYU-Depth-V2 following the settings in the Depth Anything paper. We denote the fine-tuned Depth Anything model as *Depth Anything ft.*, and our fine-tuned method as *Our ft. DA*.
- We use the weights from Metric3D encoder as initialization and randomly initialize the decoder. Then we fine-tune Metric3D and our model on the NYU-Depth-V2 dataset for 10 epochs, denoted as *Metric3D ft.* and *Our ft. M3D*, respectively.
The experimental results are reported as follows:
| Method | Reference | Backbone | Abs Rel | RMSE | $log_{10}$ | $\delta<1.25$ | $\delta<1.25^2$ | $\delta<1.25^3$ |
| :----------------: | :-------: | :------------: | :-------: | :-------: | :--------: | :-----------: | :-------------: | :-------------: |
| Depth Anything ft. | CVPR-2024 | ViT-Large | 0.056 | 0.206 | 0.024 | 0.984 | 0.998 | 1.000 |
| Metric3D ft. | ICCV-2023 | ConvNeXt-Large | 0.065 | 0.232 | 0.028 | 0.971 | 0.996 | 0.999 |
| Our ft. DA | --- | ViT-Large | **0.055** | **0.204** | **0.024** | **0.985** | **0.998** | **1.000** |
| Our ft. M3D | --- | ConvNeXt-Large | 0.062 | 0.229 | 0.027 | 0.970 | 0.996 | 0.999 |
Our finetuned methods outperform their counterparts, demonstrating the superiority of our approach.
[1] Shao et al. “IEBins: Iterative Elastic Bins for Monocular Depth Estimation", NIPS 2023.
[2] Li et al. "BinsFormer: Revisiting Adaptive Bins for Monocular Depth Estimation", TIP 2024.
[3] Liu et al. "VA-DepthNet: A Variational Approach to Single Image Depth Prediction", ICLR 2023.
2. **Clarification on Per-Pixel Basis Statement:** Thank you for your feedback. On lines 27-28, we stated that “to predict depth on a per-pixel basis within the spatial domain,” which indicates that the mentioned approaches independently predict depth for each pixel in the spatial domain. In contrast, our method models local depth correlations by predicting the frequency spectrum in the discrete cosine domain. The term “per-pixel basis” refers to the output, not the input. We will enhance the expression in the revised version to clarify this distinction.
---
Rebuttal Comment 1.1:
Title: Feedback to authors
Comment: Dear Reviewer HHm4 and fZWS,
You still need to share your feedback about the rebuttal; please post your comments as soon as possible.
Thank you
---
Rebuttal 2:
Comment: Dear Reviewer HHm4,
Thanks for your comments to our paper. As the deadline for the discussion period is approaching, we wanted to kindly remind you that your feedback is very important to us. We have submitted our responses to your comments and would greatly appreciate any additional questions or feedback you may have.
Thank you for your time and consideration.
Best regards,
The Authors | Rebuttal 1:
Rebuttal: Dear Reviewers and Chairs,
Thank you for your constructive feedback and efforts.
We have individually responded to each reviewer’s comments. In the submitted PDF, we have visualized the prediction evolution of our method in both the spatial and frequency domains.
If you have any questions, please feel free to discuss them with us. If you are satisfied with our responses, we hope you will consider improving your rating.
Best regards,
The Authors
Pdf: /pdf/8f463e14962e0a82e5424a94baa9c3d205b2b07d.pdf | NeurIPS_2024_submissions_huggingface | 2024
Learning symmetries via weight-sharing with doubly stochastic tensors | Accept (poster) | Summary: In contrast to many works that impose strict architectural constraints to parameterize neural networks that are exactly equivariant to a known group, this work considers learning approximate equivariances to unknown groups. This is done using soft weight sharing schemes, where doubly stochastic tensors are learned and applied on canonical weights. The approach generalizes GCNNs, and is shown to learn natural symmetries in experiments.
Strengths: 1. Nice background section. In my opinion, it is written much better than other related papers in the area.
2. The method is a natural generalization of group convolutions, which is developed via the neat perspective of group convolutions as applications of transformations of a canonical weight tensor. It is nice that you can choose the number of "group elements", so that e.g. non-group-symmetries can be captured.
Weaknesses: 1. The section on "Regular representations allow for element-wise activations" is a little out of place and incompletely justified. The claim that non-element-wise activations "in practice are not as effective as the classic element-wise activations" is a strong statement that needs more specific justification.
2. Direct parameterization of the kernels for each group element is expensive. Empirical runtime and memory analysis would be appreciated, to see the effect of this.
3. Empirical results are rather limited and weak. Few baselines are considered (see below as well), the baselines already perform well on these tasks, and WSCNN does not improve over the baselines in Table 1. Given that the method does not appear to be too scalable (see above), it is unclear where it would be useful in improving performance.
4. Other baselines (besides standard CNNs and GCNNs) and ablations are missing. For instance, one can imagine removing the sinkhorn projection onto the doubly stochastic matrices. If we initialize the $\Theta_i^l$ to be permutation matrices (your fixing of one representation to be identity is related to this), then this may also perform similarly.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do the learned filters of WSCNN look on CIFAR10?
2. Do learned representations ever look the same across different "group" elements?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Some discussion in section 6
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response:**
Dear reviewer, we thank you for your time and valuable comments, which we address in the following.
- On the use of element-wise activations: Regular group convolutions offer the benefit of having scalar feature field co-domains, permitting the use of any point-wise activation function without compromising equivariance [1, 2]. In contrast, steerable methods employ irreducible representations, which do not necessitate domain discretization [2]. However, these methods restrict the use of general point-wise activation functions, requiring specialized ones instead, which can be computationally expensive. This may additionally introduce discretization artifacts that disrupt exact equivariance, ultimately constraining the method’s expressivity [2].
- Regarding computational analysis, we have included such an analysis comparing our weight-sharing layer to a conventional group convolution of comparable size in the additional results, please consider point 5 in the general rebuttal, and Fig. 2 and 3 in the attached PDF.
- Regarding our empirical results and the usefulness of our method: we present additional experiments that demonstrate our method matches the performance of an unconstrained CNN with the same number of effective but free kernels (128 channels) at a lower parameter budget (please also consider point 4 in the general rebuttal). In addition, we can outperform group convolutional methods for the discovery of partial and unknown symmetries. We also address this in points 1 and 2 of the general rebuttal, and Fig. 1a, 1b and 4 in the attached PDF. We hope this sufficiently addresses your concerns about experimental validation.
- If we initialize the weight-sharing schemes as permutation matrices implementing a specific group transformation, we indeed recover a standard group convolution. A fair comparison of baselines would be comparing to methods that use weight-sharing for images, such as reference [31]. However, they report difficulties with end-to-end training and require meta-learning, making a direct comparison with our approach less straightforward. Following your comments, we have added the row-normalized baseline from [30] on rotated MNIST in the additional results (please consider point 3 in the general rebuttal, and Table 2 in the attached PDF). Visual inspection of the weight-sharing schemes seems to suggest that it is less effective in picking up meaningful group structures.
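For context on the Sinkhorn projection onto doubly stochastic matrices discussed in the review: it can be sketched as alternating row and column normalizations of a positive matrix, which converges to a matrix whose rows and columns each sum to one. The following is a minimal NumPy sketch of this standard operator under our own assumptions (the function name, iteration count, and matrix sizes are hypothetical; this is not the authors' code):

```python
import numpy as np

def sinkhorn(logits, n_iters=200):
    """Project a real matrix onto (approximately) doubly stochastic
    matrices by alternating row and column normalization of exp(logits)."""
    M = np.exp(logits - logits.max())             # positive, numerically stable
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)      # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)      # columns sum to 1
    return M

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(6, 6)))
# Maximum deviation of the row sums from 1 after the final column step:
print(float(np.abs(P.sum(axis=1) - 1.0).max()))
```

Initializing `logits` so that `P` is a permutation matrix of a known group action would recover a fixed group-convolution weight-sharing scheme, consistent with the observation above.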
Regarding your explicit questions:
- (Question 1) Regarding the learned representations in the CIFAR-10 experiments, we highlight that our model operates *without* applying specific algebraic or group constraints to the learned representations, resulting in a more abstract and adaptable weight-sharing scheme. As a result, the patterns in weight-sharing are not as readily interpretable through traditional group-theoretic approaches, and it also makes direct visual interpretation of these patterns more nuanced.
- (Question 2) While we do not put any explicit regularization on the diversity of the learned representations, we observed in our experiments that our method learns distinct weight-sharing patterns.
We thank you for your insightful comments and would appreciate you considering raising your score if you believe they have been addressed sufficiently, or follow up with any questions we may help clarify further.
**References**
[1] General E(2)-Equivariant Steerable CNNs. Weiler et al. NeurIPS 2019
[2] Fast, Expressive SE(n) Equivariant Networks through Weight-Sharing in Position-Orientation Space. Bekkers et al. ICLR 2024
[30] Equivariance Discovery by Learned Parameter-Sharing. Raymond A. Yeh, Yuan-Ting Hu, Mark Hasegawa-Johnson, Alexander Schwing. AISTATS 2022
[31] Meta-Learning Symmetries by Reparameterization. Allan Zhou, Tom Knowles and Chelsea Finn. ICLR 2021
---
Rebuttal Comment 1.1:
Comment: Hello authors. Thank you for the response, and apologies for my late response.
Your elaboration on (Question 1) is helpful (although it is clear from your paper, I forgot since I was thinking in terms of equivariant networks). I think that this, as well as your new ablations on less constraining of the learned representations, would be useful additions to your paper.
My remaining worries are more high-level, on the utility of the method. In my view, the utility of approximate equivariance or discovering symmetries is not adequately demonstrated or achieved with the given method. The experiments are only on augmentations of MNIST, or on non-SOTA regimes in CIFAR-10. This could be fine, if the model were developed with methods that one could see being useful in different regimes in the future. However, I think the poor efficiency and scalability of the model severely harms this. On the axes of efficiency and accuracy, the model is not efficient, and does not have clear accuracy gains in interesting experimental areas. Perhaps this work would benefit from experiments in areas (say, in the physical sciences) that more clearly desire equivariance, where equivariant models are actually SOTA or near SOTA.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response, and we are glad some of your concerns have been addressed. Regarding your remaining concern about state-of-the-art performance, it is important to highlight that the primary aim of our approach is in line with regular group convolutional methods, focusing on learning useful constrained functions rather than competing directly with SOTA image models. This distinction is crucial as it frames our method as a tool for understanding and utilizing inherent symmetries in data. Note that the method can easily be extended to any task where signals are defined on a discretized domain, such as volumetric grids. In these cases there is some evidence that unconstrained models are more prone to overfitting [1], and our method's ability to inherently recognize and utilize symmetries can be particularly advantageous. For the sake of focusing on method development, we consider such applications out of scope for this work.
As such, in the current experiments we utilize MNIST and CIFAR-10 to demonstrate the main claims about symmetry discovery and feasibility. Additionally, while many CNNs rely on extensive data augmentation to achieve SOTA results, there is emerging evidence suggesting that such strategies might not always align well with real-world data distributions, potentially leading to suboptimal generalization [2, 3, 4, 5]. By contrast, our constrained approach offers a systematic way to learn and adapt to symmetries present in the data.
In direct comparisons, our model shows that it can match the performance of unconstrained CNNs, underlining its efficacy even without clear accuracy gains in the standard experimental setups used (see point 4 in the general response). In these cases the CNN models have a considerably larger number of trainable parameters, implying that our method may not need to operate at the same scale/channel capacity to achieve satisfactory performance. This underlines the method's potential utility in settings where understanding and incorporating data-driven symmetries are crucial. We believe that further development and application of this method in contexts where equivariance is highly valued will substantiate its utility and is left as future work.
We kindly encourage considering the broader context of our research’s objectives and its potential contributions to the field.
**References**
[1] Regular SE(3) Group Convolutions for Volumetric Medical Image Analysis. Thijs P. Kuipers and Erik J. Bekkers. MICCAI 2023
[2] Learning Equivariances and Partial Equivariances from Data. David W Romero, Suhas Lohit. NeurIPS 2022
[3] Learning Invariances in Neural Networks. Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson. NeurIPS 2020
[4] Learning Layer-wise Equivariances Automatically using Gradients. Tycho F. A. van der Ouderaa and Alexander Immer and Mark van der Wilk. 2023
[5] Relaxing Equivariance Constraints with Non-stationary Continuous Filters. Tycho FA van der Ouderaa, David W Romero, Mark van der Wilk. NeurIPS 2022 | Summary: Overview:
This paper claims to build upon a long line of prior work on symmetry detection and learning equivariant kernels [18-20, 22, 25-31], especially reference [30], which uses an identical weight- and parameter-sharing scheme that learns and discovers equivariances solely via row-stochastic entries. This paper's idea is to enforce both row and column stochasticity through use of the Sinkhorn operator, achieving equivariance discovery that is applicable to more interesting data domains such as images.
Advantages:
* The paper addresses an important problem of kernel equivariance via symmetry detection and weight-sharing in CNNs. However, it makes only marginal progress over prior work.
Weaknesses:
* Comparisons with prior work are missing.
* The quantitative advantage of double stochasticity (row and column) may hold for metric spaces (images); however, this is neither explained nor quantified through comparison.
* Moreover, while ground-truth tests are done for some toy problems, the quantitative generalizability of double stochasticity over single stochasticity is never delineated, nor proved.
Questions:
Is this paper identical to this ICML 2024 workshop paper? https://scholar.google.com/citations?view_op=view_citation&hl=en&user=gfRkDXEAAAAJ&citation_for_view=gfRkDXEAAAAJ:ufrVoPGSRksC
Also why no comparisons to even reference [30] ?
Missing References:
* Several references are missing; the entire suite of probabilistic symmetry detection and equivariant NNs is not discussed. See, e.g., Probabilistic Symmetries and Invariant Neural Networks. Benjamin Bloem-Reddy and Yee Whye Teh. arXiv:1901.06082, 2020. https://arxiv.org/abs/1901.06082
Strengths: This paper enforces both row and column stochasticity through use of the Sinkhorn operator to achieve equivariance discovery. However, it is not clear what quantitative benefits this paper's method provides.
Weaknesses:
* Comparisons with prior work are missing.
* The quantitative advantage of double stochasticity (row and column) may hold for metric spaces (images); however, this is neither explained nor quantified through comparison.
* Moreover, while ground-truth tests are done for some toy problems, the quantitative generalizability of double stochasticity over single stochasticity is never delineated, nor proved.
Questions:
Is this paper identical to this ICML 2024 workshop paper? https://scholar.google.com/citations?view_op=view_citation&hl=en&user=gfRkDXEAAAAJ&citation_for_view=gfRkDXEAAAAJ:ufrVoPGSRksC
Also why no comparisons to even reference [30] ?
Missing References:
* Several references are missing; the entire suite of probabilistic symmetry detection and equivariant NNs is not discussed. See, e.g., Probabilistic Symmetries and Invariant Neural Networks. Benjamin Bloem-Reddy and Yee Whye Teh. arXiv:1901.06082, 2020. https://arxiv.org/abs/1901.06082
Technical Quality: 3
Clarity: 2
Questions for Authors:
Is this paper identical to this ICML 2024 workshop paper? https://scholar.google.com/citations?view_op=view_citation&hl=en&user=gfRkDXEAAAAJ&citation_for_view=gfRkDXEAAAAJ:ufrVoPGSRksC Is this acceptable?
Also why no comparisons to even reference [30] ?
Missing References:
* Several references are missing; the entire suite of probabilistic symmetry detection and equivariant NNs is not discussed. See, e.g., Probabilistic Symmetries and Invariant Neural Networks. Benjamin Bloem-Reddy and Yee Whye Teh. arXiv:1901.06082, 2020. https://arxiv.org/abs/1901.06082
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: There is no attempt to analyze the generalizability of the operator to varying initial and boundary conditions or Reynolds numbers.
Other potentially relevant paper to consider:
https://www.sciencedirect.com/science/article/pii/S0021999123001997
This paper uses error-correction in neural operators based on the residual.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, we thank you for your time and comments, which we address in the following.
Regarding the motivation for double stochasticity and its comparison to prior work, we point out the key benefits of employing double stochasticity over single stochasticity:
- Theoretical motivation: We establish a link between regular representations, which form the basis of group convolution actions, and their manifestation as soft permutation matrices. Using doubly stochastic matrices allows for an analysis of learned representations through the lens of partial equivariance, a connection we elaborate on in Section 4.2. We are happy to motivate this in more detail in the camera-ready version.
- Practical Implications: As outlined in Section 4.2, our methodology employs doubly stochastic matrices for weight-sharing, which facilitates direct calculation of expectations from the implicitly learned distribution. This setup avoids the need for Monte Carlo approximations and is adaptable to unrecognized symmetries that may not conform to standard group structures. This approach to handling partial equivariance offers a novel angle compared to traditional symmetry discovery methods, such as those proposed by Yeh et al. [30] and Zhou et al. [31].
Hence, we propose that there is a well-grounded preference for employing double stochasticity over single stochasticity in the context of group convolutions. Following your comments, we have added the row-normalized baseline from [30] on rotated MNIST in the additional results (please consider point 3 in the general rebuttal, and Table 2 in the attached PDF).
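As a concrete illustration of how doubly stochastic matrices arise, here is a minimal NumPy sketch of the Sinkhorn normalisation; the function name, iteration count, and matrix size are illustrative only, not the paper's actual implementation:

```python
import numpy as np

def sinkhorn(logits, n_iters=200):
    """Map an unconstrained square matrix to an (approximately)
    doubly stochastic one by alternating row/column normalisation."""
    M = np.exp(logits - logits.max())  # positive entries, numerically stable
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)))
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=0), 1.0, atol=1e-3)  # column marginals
assert np.allclose(P.sum(axis=1), 1.0, atol=1e-3)  # row marginals
```

Because `P` is (softly) doubly stochastic, applying it to a kernel directly computes an expectation over index re-assignments, which is why no Monte Carlo approximation is needed.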
- As per author guidelines on dual submissions (section on Dual Submission, [**https://neurips.cc/Conferences/2024/CallForPapers**](https://neurips.cc/Conferences/2024/CallForPapers)) it states that *“Papers previously presented at workshops are permitted, so long as they did not appear in a conference proceedings (e.g., CVPR proceedings), a journal or a book.”* The work referenced in your review was submitted as an extended abstract at the ICML workshop “Geometry-grounded Representation Learning and Generative Modeling” (https://gram-workshop.github.io/cfp.html) which is **non-archival, and hence does not violate dual submission policy.**
- Regarding the stated limitation, we appreciate the remark but do not focus on initial and boundary conditions as we do not use PDEs as a use case for our method in this work. If we were to extend our proposed method to PDE solving, we would require updating our architecture to accommodate boundary conditions and initial conditions to solve any PDEs efficiently. As far as the scope of this current work is concerned, we *learn* the symmetries or partial symmetries directly from the data through weight-sharing schemes, which do not require explicitly defining any boundaries.
We thank you for your insightful comments and would appreciate you considering raising your score if you believe they have been addressed sufficiently, or follow up with any questions we may help clarify further.
**References**
[30] Equivariance Discovery by Learned Parameter-Sharing. Raymond A. Yeh, Yuan-Ting Hu, Mark Hasegawa-Johnson, Alexander Schwing. AISTATS 2022
[31] Meta-Learning Symmetries by Reparameterization. Allan Zhou, Tom Knowles and Chelsea Finn. ICLR 2021
---
Rebuttal 2:
Comment: Dear reviewer uLBU, we believe we have addressed the weaknesses raised in your original review; an acknowledgment or response to our rebuttal would be much appreciated. We are very much open to addressing any concerns in the remaining discussion period. | Summary: The paper proposes symmetry discovery through learned parameter-sharing in weight matrices. The parameterization relies on relaxing underlying permutation matrices into doubly stochastic matrices. In combination with additional regularization, the parameterization can be used to successfully discover symmetries in data, improving generalization performance.
Strengths: The method appears to be very elegant in offering a natural relaxation of underlying weight-sharing scheme through Sinkhorn operator. The paper is well-written and provides nice illustrations which guide the reader in understanding the proposed method. Then, the paper provides thorough empirical validation demonstrating usefulness of the approach in practice.
Weaknesses: -Regularization.
The paper proposes a novel parameterization that can be used for symmetry discovery. In terms of objective, the work relies on direct regularization to encourage equivariant solutions, similar to some other symmetry discovery works. It has been shown in prior work that this strategy can have issues, since it introduces an additional hyperparameter which may need additional tuning (thereby being less ‘automatic’). In terms of explaining the methodology, the paper would benefit from some discussion of the role of the regularizer used.
-Scalability.
The paper seems to scale quadratically in |X|. Is this not an issue? How does this compare against scaling of alternative methods?
Technical Quality: 3
Clarity: 3
Questions for Authors: -Regularization
For experiments, it would be helpful if it is clear how the strength of this regularizer is chosen. Cross-validation?
-Analysis of learned symmetries
It would be interesting to better understand what weight-sharing is being learned for CIFAR-10, apart from merely measuring improved test accuracy. Is there an analysis on the learned symmetry?
-Regularization
Entropy regularization seems to hurt on CIFAR-10, but improve on MNIST experiments. Do authors have an intuition on why this is the case?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The method proposes an elegant and novel symmetry discovery method. I expect the proposed parameterization to be beneficial to the community. The paper could improve a bit on the objective function side of things (seems to rely on directly encouraging symmetry, like some prior work). Apart from the remaining questions, the paper makes for a strong contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, we thank you for your time and valuable comments, which we address in the following.
Regarding employed regularizers, we acknowledge that the motivation could have been more explicitly mentioned, and we provide some added details:
- Entropy Regularizer: The primary motivation for using an entropy regularizer is to encourage sparsity in our weight-sharing schemes, which helps the matrices approximate true permutation matrices rather than “simply” satisfying double stochasticity. This approach stems from our initial intuition that weight-sharing schemes should mimic soft permutation matrices. The effectiveness of the induced sparsity depends on the specific group transformations relevant to the task. For example, $C_4$ rotations are typically represented by exact permutation matrices. In contrast, $C_N$ rotations or scale transformations might require interpolation, thus aligning more closely with soft permutation matrices. Our experimental results indicate that the utility of this regularizer varies with the underlying transformations in the data. For instance, as anticipated it is of lesser use for scale transformations in the MNIST dataset.
- Normalization Regularizer: Empirically, we have found the normalization regularizer essential for reducing the number of iterations needed by the Sinkhorn operator to ensure the matrices are row and column-normalized. Without this regularizer, the tensors either fail to achieve double stochasticity or require an excessively high number of Sinkhorn iterations to do so.
We will outline our motivation behind these two regularizers and the impact of varying $\lambda$ values for the entropy regularizer more explicitly in the camera-ready version.
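To make the entropy regularizer's effect concrete, here is a small sketch using the standard entry-wise entropy; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def entropy_penalty(P, eps=1e-12):
    """Entry-wise entropy of a (doubly stochastic) matrix.
    Minimising it pushes entries toward {0, 1}, i.e. toward a hard
    permutation matrix rather than a smeared soft one."""
    return -np.sum(P * np.log(P + eps))

hard_perm = np.eye(3)[[1, 2, 0]]    # exact permutation: near-zero entropy
soft = np.full((3, 3), 1.0 / 3.0)   # maximally diffuse doubly stochastic matrix
assert entropy_penalty(hard_perm) < 1e-9
assert entropy_penalty(soft) > 3.0  # equals 3 * ln(3), about 3.30
```

This matches the intuition above: for transformations represented by exact permutations (e.g. $C_4$ rotations) the penalty drives the scheme toward sparsity, while interpolating transformations keep nonzero entropy.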
- Regarding concerns about scalability: while we agree that quadratic scaling in $|X|$ can be impractical (for an image of width $s$ we have $|X| = s^2$, giving $O(s^4)$ scaling), our method exploits the fact that convolutional kernels have limited localized supports in practice. Thus, for a kernel size $k$ we scale with $|X| = k^2$ rather than the entire signal domain. This approach significantly reduces computational demands and makes our approach practically scalable. Please also consider point 5 in the general rebuttal for this aspect.
- Regarding the interpretation of the weight-sharing patterns learned on CIFAR-10, it is important to note that our model does not impose algebraic or fixed group constraints on the learned representations. This design choice enhances the flexibility and abstraction of the weight-sharing schemes. As a result, the weight-sharing patterns may not be easily interpreted using conventional group-theoretic approaches, which could affect the direct visualization clarity of the learned weight-sharing.
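The scalability point above can be quantified with a quick back-of-the-envelope calculation; the image width and kernel size below are illustrative, not the paper's settings:

```python
# A weight-sharing matrix over a domain X has |X|^2 entries.
# Compare sharing over the full image vs. only the localized kernel support:
s, k = 32, 3                    # illustrative image width and kernel size
full_domain = (s * s) ** 2      # |X| = s^2 pixels  -> 1,048,576 entries
kernel_support = (k * k) ** 2   # |X| = k^2         -> 81 entries
assert full_domain == 1_048_576
assert kernel_support == 81
```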
We thank you for your insightful comments and would appreciate you considering raising your score if you believe they have been addressed sufficiently, or follow up with any questions we may help clarify further.
---
Rebuttal 2:
Comment: Dear reviewer 3Y5V, we believe we have addressed the weaknesses raised in your original review; an acknowledgment or response to our rebuttal would be much appreciated. We are very much open to addressing any concerns in the remaining discussion period.
---
Rebuttal Comment 2.1:
Comment: I appreciate the authors' further explanations and comments. I maintain my score of 7 and recommend acceptance, as I deem this a technically solid paper that will have high impact in the community. This is conditional on the authors including some more discussion of the use of regularization and the computational concerns, as well as the issues raised by other reviewers.
---
Reply to Comment 2.1.1:
Comment: We thank you for your response. We are glad you appreciated the work and will incorporate the necessary alterations. | Summary: This paper introduces a parameterisation that contains the ability to represent weight tying corresponding to arbitrary (?) group equivariances. In practice it can represent interpolations of weight tying, but it is argued that this is a feature (not a bug!) since strict equivariance is often too strong a constraint to place on a model for it to still fit the training data.
Strict equivariance is known to be represented by a permutation matrix (correct me if I'm wrong), which justifies the need to use doubly stochastic matrices to interpolate. This makes the edges of the parameterisation strict equivariances. (For equivariances of discretised continuous signals this is not quite true, as the permutation in continuous space would need to be approximated by some interpolation in the discrete space.)
This leads to a new parameterisation of a weight structure in a layer that can be trained in the usual way. The doubly-stochastic matrices can then be investigated to see if equivariance is actually learned.
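The permutation-matrix observation above can be checked concretely: a discrete group element acting on a finite grid is exactly a permutation matrix, an extreme point of the set of doubly stochastic matrices. A small illustrative check (not code from the paper):

```python
import numpy as np

# A 90-degree clockwise rotation of a flattened 2x2 image:
# pixel layout (0 1 / 2 3) maps to (2 0 / 3 1).
perm = [2, 0, 3, 1]
P = np.eye(4)[perm]                 # the corresponding permutation matrix

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(P @ x, x[perm])  # P implements the rotation

# P is doubly stochastic: rows and columns each sum to 1.
assert np.allclose(P.sum(axis=0), 1.0)
assert np.allclose(P.sum(axis=1), 1.0)

# Four quarter-turns compose to the identity, as C_4 group structure demands.
assert np.allclose(P @ P @ P @ P, np.eye(4))
```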
The experiments implement the method, and run on benchmark datasets and synthetic datasets, showing good performance, and somewhat interpretable group structure appearing. It is unclear what the actual point of the experiments is, since there are many reasons to use equivariance, but the experiments are not phrased in terms of this (see discussion).
Strengths: The problem of learning equivariances is very important, as it would remove a significant difficulty in designing networks with the correct inductive biases. The solution is flexible, as any (?) group structure can be represented by the parameterisation. There is also an elegant solution to the problem of needing a large number of parameters, that will work in practice for image data: Assuming translational equivariance, and only parameterising additional equivariances on the filters that are much smaller than the image.
Weaknesses: Overall, the paper presents a well-reasoned method to an important problem, and I do believe that it meets the standard for publication at NeurIPS.
**Method & Presentation**
The method is well-justified. However, a final summary of what a forward pass through a layer looks like was not given, and would be really helpful. In addition, it would be helpful to have a clearer discussion of how many additional parameters are added (beyond lines 220 onwards), with the architectures that are discussed given as an explicit example.
Essentially, one thing which seems to be the case, but is not explicitly acknowledged, is that this method collapses to just a special parameterisation of weight matrices, where the weights have low-rank combined with doubly-stochastic structure. The low-rank-ness is shown in eq 6. While this is a simplistic way of looking at the method, it does give a helpful alternative view. Making this explicit would help the paper.
**What is the claim of the experiments?**
The experiments are the main weakness of the paper. Some qualitative results about the structure of the learned weights are given, which are helpful. But it is not clear what the quantitative claim of the experiments is. Equivariance can help in several ways, e.g. better out-of-distribution prediction, better prediction at low data, or smaller/more compact models. So is the claim that the equivariance inductive bias helps, and it can be discovered automatically? But in this case this is not disentangled from the model capacity. Perhaps a normal CNN would perform better if it were just made larger! This is additionally indicated by the baseline's 70% accuracy on CIFAR-10, which is low compared to what other non-group-equivariant methods can achieve. This unclarity also exists in the synthetic experiments, where the size of the dataset is not discussed. Low-data experiments could help here, since it's easier to make the model large enough that size doesn't help any longer, which isolates inductive bias only.
The CIFAR experiments show that making the model larger improves performance. How can we be sure that this is really the benefit of learning equivariances, rather than adding more capacity? Another experiment that is necessary here, is a comparison to a weight structure that does not have the doubly-stochastic constraint enforced on it. This would allow the effect of simply adding capacity to be tested.
Alternatively, a low-data experiment would allow the generalisation capabilities of the model to be tested (e.g. [Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations](fig 3 in https://proceedings.neurips.cc/paper_files/paper/2022/file/50d005f92a6c5c9646db4b761da676ba-Paper-Conference.pdf)). This could be done on MNIST variants, where currently the differences are so small, it is hard to draw conclusions.
This same issue pops up in the synthetic experiments. What is the dataset size? If the dataset is so large that all transformed signals are in the dataset, even a fully-connected network would learn the correct function. A low-data experiment is needed to really show that truly an equivariance has been learned that can help to _generalise_. Alternatively, you need to argue a benefit on the basis of parameter count.
In summary: The claims that the experiment section support are not clear. The field of equivariances is mature enough that the potential benefits have been clearly described, and these need to be clearly evaluated in experimental sections.
**Related Work**
The idea of relaxing equivariance by placing a distribution over transformations is older than the papers currently cited. E.g.:
- [Local Group Invariant Representations via Orbit Embeddings](http://proceedings.mlr.press/v54/raj17a/raj17a.pdf)
The discussion of "symmetry discovery methods" is unclear. What data do these methods need, or what kind of training signal do they use? What kind of predictive improvements do they obtain? Is the goal of these papers the same as those in the previous paragraph? Or, is the way that these methods learn group structure different from those in the previous section? If so, how?
Methods that learn a degree of equivariance on a layer-by-layer basis are relatively new, and it would be good to discuss this explicitly. E.g.:
- [Residual Pathway Priors for Soft Equivariance Constraints](https://proceedings.neurips.cc/paper/2021/file/fc394e9935fbd62c8aedc372464e1965-Paper.pdf) Finzi et al is not mentioned at all, but does allow partial equivariance in a layer-by-layer way.
- Reference [26] "Learning layer-wise equivariances automatically using gradients" is cited, but only in the context of arguing that overly constrained models suffer poor performance, but not in the context that this paper also discusses how to learn the right equivariance to use. This author has more papers on learning invariance/equivariances that may be relevant.
Overall, the literature review misses a lot of relevant work. The suggestions I gave are off the top of my head and I definitely missed some important papers too. However, it is the responsibility of the authors to put the time and effort into going beyond this to give a more thorough overview.
**Minor**
- _"Requiring no prior knowledge of the possible symmetries."_ (line 52) It is true that earlier methods (with the exception of [31]) could only pick between groups that were completely specified a-priori. While this paper _in principle_ does provide a parameterisation that can _search_ over a much wider space, this space needs to be limited for scalability reasons, and it was not demonstrated that the method would work reliably without this "prior" being added!
- It would be really helpful to have a full discussion of the impact on the number of parameters for these specific experiments, including the total parameter count.
- Is there a typo in params in table 1? Under "Params" should "103 + 265K" be "103K + 256K"?
Technical Quality: 2
Clarity: 2
Questions for Authors: - Can this parameterisation represent arbitrary group equivariances? Is the representation of every strict equivariance a permutation matrix? It would be helpful to be explicit about how general this really is.
- Am I right in understanding that this method ultimately just parameterises a low-rank weight matrix over different feature channels?
- Can the benefit that equivariance promises in the low-data regime still be provided by this method when the invariance is learned? Since effectively, you're just parameterising weights in a different way (low rank?). In rotationally equivariant settings in low-data, could these weights not just overfit, rather than learning to rotate filters? Would this not lose an important benefit of equivariance?
- How important is the doubly-stochastic nature of things? Could you just run an experiment without the Sinkhorn component at all?
- Can you give a very short (ideally 1 sentence, or a 2-3 sentences) summary of the quantitative claims that are made about the method, that are verified in the experiment section?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: See above.
Overall, this is a really interesting idea. I do have some concerns about the evaluation. These concerns are large in the scheme of determining how well this method really works relative to clearly formulated claims, but small relative to typical approaches in the ML community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response:**
Dear reviewer, we thank you for your constructive feedback and engaged questions, which we address in the following.
- We thank the reviewer for pointing out the connection of our proposed method to low-rank approaches. It is possible to make our doubly stochastic matrices converge to permutation matrices with an appropriate regularization technique. In those cases we obtain sparse matrices that could be represented via low-rank approximation methods. However, we note that the degree of regularization is task- and domain-specific.
- Regarding experimental claims, our claim is indeed that the inductive bias introduced by equivariance is beneficial, and can furthermore be discovered automatically [Ref Fig 1 a,b and Fig 4 in the PDF]. For natural images, the bias of translation equivariance may be sufficient in many cases, in particular when enough channels are present. This is not the case for other datasets, where the data itself may contain group symmetries beyond translation, as shown by [20, 27]. Our main claim is the ability to uncover such symmetry groups and employ them in different downstream tasks. While a look-up table may match the existence of an *a priori* specified symmetry group, this is not the case for partial symmetries or incomplete ones. In such cases, we demonstrate that we can discover the symmetry in the sense of a usable shared representation, without needing to identify the exact subgroup (if such a subgroup even exists).
- We acknowledge that conventional CNNs with an equivalent channel count inherently possess a higher expressive capacity due to a lack of kernel constraints. Therefore, we have opted to compare overall parameter counts rather than the number of kernels, a more comprehensive assessment in our opinion. For completeness, we have additionally measured the performance for an unconstrained 128-channel CNN (please consider point 4 in the general rebuttal).
- We thank the reviewer for the suggestion on experimenting with the low-data regime. We agree that the size of the dataset plays an important role, and we have added the suggested experiments for both cases of full and partial symmetry discovery (please consider point 2 in the general rebuttal and Fig. 2 in the attached PDF).
- Regarding additional literature review, we will expand the related works section and further clarify the position of our work within the literature for the camera-ready version. We would like to stress that we do *not* explicitly impose any group structure in our representation stack, since that would defeat our goal of symmetry *learning* and enables us to potentially learn a weight sharing scheme that violates some of the group axioms if the group symmetry is not present in the data. Therefore, we refrain from making any theoretical assumptions that make use of more complex statistical group theory results. We agree that we could have been more precise regarding these claims in the paper, and will clarify this. Our method primarily aims to learn symmetries from data explicitly through a weight sharing scheme which *may or may not* be an exact group symmetric structure (if the data does not possess a full group structure). Then, we implicitly assign several probability measures of finite support over this learned symmetry scheme. We emphasize that our method is therefore more flexible than a strict group equivariant network due to its ability to learn partial symmetries.
Regarding your explicit questions:
- (Question 1) Yes, this parameterization can represent arbitrary group equivariances for compact Lie groups. For strict or partial equivariance, we employ a permutation matrix.
- (Question 2) We hope that our first point in the response above addressed this question.
- (Question 3) In the low-data regime, the model might overfit to a limited set of group transformations. This can be mitigated with a proper regularizer. We would argue that this is an advantage for learning partial or incomplete symmetries, as we make no assumptions on completeness or uniform distribution over group transformation in the data. In those cases, strict equivariance can hurt performance and may also be an overconfident bias lacking the data evidence to support it. On the other hand, our method allows us to address this issue by learning directly from any symmetries that are present in the data, without the need of such assumptions on, e.g., completeness of a group structure or uniform distribution over group transformation.
- (Question 4) Zhou et al. [31] essentially learn a parameter sharing scheme without constraints on the sharing pattern except norm regularization. However, they report that they require meta-learning to pick up on symmetries, while our method does not. We presume that the added constraint on double stochasticity is beneficial in that regard.
- (Question 5) To briefly summarize in two points: (1) The model can recover group convolutional structures when clear symmetries, including partial symmetries, are present in the data (see our experiments on Rotated MNIST); and (2) The model can pick up on useful weight sharing patterns when there are no *a priori* known symmetry structures. This is shown in our CIFAR-10 experiments, where we match the CNN’s model performance and outperform a model with pre-specified group equivariance. This is additionally underlined in Point 1, 2 and 4 in the general response.
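To make the double-stochasticity constraint mentioned in the answer to Question 4 concrete, here is a minimal, pure-Python sketch of Sinkhorn-Knopp normalization, one standard way to (approximately) project a positive matrix onto the set of doubly stochastic matrices. The function name and iteration count are illustrative assumptions on our part, not the paper's actual parameterization:

```python
def sinkhorn(M, n_iters=200):
    """Alternately normalize rows and columns of a positive matrix so
    that both row and column sums approach 1 (doubly stochastic)."""
    n, m = len(M), len(M[0])
    for _ in range(n_iters):
        M = [[x / sum(row) for x in row] for row in M]                    # rows -> 1
        col_sums = [sum(M[i][j] for i in range(n)) for j in range(m)]
        M = [[M[i][j] / col_sums[j] for j in range(m)] for i in range(n)] # cols -> 1
    return M

S = sinkhorn([[0.9, 0.1, 0.4],
              [0.2, 0.8, 0.5],
              [0.3, 0.3, 0.7]])
```

After convergence, every row and column of `S` sums to 1, so `S` can be read as a soft assignment over group elements rather than an unconstrained sharing pattern.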
We thank you for your insightful comments and would appreciate you considering raising your score if you believe they have been addressed sufficiently, or follow up with any questions we may help clarify further.
**References**
[20] Learning equivariances and partial equivariances from data. David W. Romero and Suhas Lohit. NeurIPS 2022
[27] Approximately equivariant networks for imperfectly symmetric dynamics. Rui Wang, Robin Walters, Rose Yu. ICML 2022
[31] Meta-Learning Symmetries by Reparameterization. Allan Zhou, Tom Knowles and Chelsea Finn. ICLR 2021
---
Rebuttal 2:
Comment: Dear reviewer 1myY, we believe we have addressed the weaknesses raised in your original review; an acknowledgment of or response to our rebuttal would be much appreciated. We remain very much open to addressing any concerns during the remaining discussion period.
---
Rebuttal 3:
Comment: > We acknowledge that conventional CNNs with an equivalent channel count inherently possess a higher expressive capacity due to a lack of kernel constraints. Therefore, we have opted to compare overall parameter counts rather than the number of kernels, a more comprehensive assessment in our opinion. For completeness, we have additionally measured the performance for an unconstrained 128-channel CNN (please consider point 4 in the general rebuttal).
This is a really helpful addition. Interesting, so the main claim now is that for a parameter-constrained model, you obtain better performance? Size of the model clearly does come into this, as the CNN also gets better as more filters are added. This is fine, and a clear claim. However this does mean that the current experiments only show that the invariance learning properties work in the _underfitting_ regime! This is different from what is wanted from learning equivariances, where we may want to show the ability to obtain improved performance, once other simpler methods like making the model larger stop working.
This is why other methods e.g. [31, 26] consider second order information (meta-learning, marginal likelihood approximations): to distinguish between inductive biases even when the training loss cannot. This was also discussed in [*] and back in 2018 [**]. This is what the low data experiment could have shown, if it was verified that the model was large enough to sufficiently fit the training data. Also the low-data experiment is of limited usefulness, since it does not contain a baseline of a non-invariant model. The interesting question here is to see how much of an improvement you can get by learning invariances over a model that cannot do this. (Also relevant to your answer to my question 4.)
> We would like to stress that we do not explicitly impose any group structure in our representation stack
This was clear from the paper, and is certainly valuable! However, I don't see how this makes the literature I pointed to any less relevant?
## Overall
Either way, while I don't think this is the end of the question of how to learn equivariance automatically, I do believe it is an interesting paper, and I will continue to argue for acceptance.
[*] [Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations](https://proceedings.neurips.cc/paper_files/paper/2022/file/50d005f92a6c5c9646db4b761da676ba-Paper-Conference.pdf) 2022 (not cited in the paper)
[**] [Learning Invariances using the Marginal Likelihood](https://arxiv.org/abs/1808.05563) 2018 (not cited in the paper) | Rebuttal 1:
Rebuttal: # General response
We are grateful for the time, effort and invaluable feedback provided by the reviewers. We next address general points raised jointly amongst reviewers, and proceed to respond to specific comments in the individual author rebuttals below.
Firstly, we are glad that you **found the work interesting and elegant** (1myY: "an elegant solution to the problem”; “Overall, this is a really interesting idea”; 3Y5V: “The method appears to be very elegant in offering a natural relaxation of underlying weight-sharing scheme”). Furthermore, reviewers appreciated that the **theory was presented in a clear manner** (3Y5V: ”The paper is well-written”; xU9K: ”Nice background section. In my opinion, it is written much better than other related papers in the area.”).
The reviewers also identify valuable points for improvement which we will gladly incorporate via minor edits in the camera-ready version of the paper. The main additions and experiments in response to reviewers’ concerns include:
1. We have added two experiments that show the method's effectiveness in the case of partial and misspecified symmetries, namely: MNIST with partial group structure (rotations sampled from a subset of $SO(2)$; **Fig. 1a** in the attached PDF) and CIFAR-10 with horizontal flips.
2. We have added a comparison of models in the low-data regime for fully rotated and partially augmented MNIST (**Fig. 4** in the attached PDF). In the presence of partial symmetries (half of the rotation angles) and symmetry misspecification (flips), our weight-sharing method can find useful weight-sharing patterns.
3. Reviewers have pointed out a comparison to reference [30] and asked about the benefits of employing double stochasticity over single stochasticity. We have attached results for this baseline on the rotated MNIST dataset trained for 100 epochs. Visual inspection of the weight-sharing schemes suggests the method is not as effective in picking up the group structure.
4. We have incorporated results for CIFAR-10 for a standard CNN model that matches the effective kernel size used in our GCNN/WSCNN models, namely 128 channels (calculated as $|G| \times \text{channels} = 4 \times 32 = 128$) (**Table 1** in the attached PDF). We have also added this model as a baseline in the flipped CIFAR-10 experiment (**Fig. 1b** in the attached PDF). Despite the kernel constraints in our model, it achieves performance comparable to that of the unconstrained 128-channel CNN, but with significantly fewer learnable parameters (our 2.1 M vs. the 6.5 M of the CNN).
5. We have added a computational scaling analysis of our weight-sharing layer, comparing it to a group convolutional model of the same dimensions (**Fig. 2 and 3** in the attached PDF). We highlight that regular group convolutions can be implemented via weight-sharing schemes, resulting in equal computational demands for both approaches. Since the weight-sharing approach applies the group transformation in parallel across all elements (as a result of matrix multiplication and reshape operations), our method can prove quite efficient. Regarding memory allocation, group convolutions are often implemented using *for-loops* over the group actions; this sequential method imposes a lighter memory burden since the operation is applied in series per group element. However, although the scaling of weight sharing is quadratic w.r.t. the number of group elements and the domain size, we mitigate this issue by exploiting the typically low-dimensional support of convolutional filters (i.e., $|G| \leq 16$ and kernel size $\leq 7$), rendering our approach practical for a wide range of settings.
6. We thank the reviewers (1myY, uLBU) for pointing out relevant related literature and will update our related works section accordingly with your suggested references.
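The parallel-versus-sequential comparison in point 5 can be illustrated with a small NumPy sketch. This is our own hypothetical example, not the paper's implementation: the four 90-degree rotations stand in for a generic group $G$, written as permutations of the flattened kernel entries.

```python
import numpy as np

# Hypothetical sketch: a weight-sharing layer stores one base kernel w
# and a stack T of |G| transformation matrices (here, the four
# 90-degree rotations as permutations of the flattened kernel).
k = 3
w = np.arange(k * k, dtype=float)  # flattened k x k base kernel

def rotation_permutation(r):
    """Permutation matrix rotating a flattened k x k kernel by r * 90 degrees."""
    idx = np.rot90(np.arange(k * k).reshape(k, k), r).ravel()
    P = np.zeros((k * k, k * k))
    P[np.arange(k * k), idx] = 1.0
    return P

T = np.stack([rotation_permutation(r) for r in range(4)])  # shape (|G|, k^2, k^2)

# One batched matmul produces all |G| transformed kernels in parallel ...
parallel = T @ w
# ... matching a sequential for-loop over the group elements.
sequential = np.stack([T[g] @ w for g in range(4)])
assert np.allclose(parallel, sequential)
```

The batched form trades memory (all $|G|$ transformed kernels materialized at once) for speed, which is the trade-off discussed in point 5.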
We hope that our additional experiments and clarifications were able to resolve any commonly raised points, and would be happy to respond to remaining concerns, if any, during the discussion period.
We further address each reviewer's concerns and questions in our individual responses, and we thank the reviewers once again for the valuable feedback that we believe will increase the quality of our contributions.
**References**
[30] Equivariance Discovery by Learned Parameter-Sharing. Raymond A. Yeh, Yuan-Ting Hu, Mark Hasegawa-Johnson, Alexander Schwing. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:1527-1545, 2022.
[31] Meta-Learning Symmetries by Reparameterization. Allan Zhou, Tom Knowles and Chelsea Finn. ICLR 2021
Pdf: /pdf/1884d2d4234e0cbd78af6ca29b6b83d968af8ee7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Boundary Matters: A Bi-Level Active Finetuning Method | Accept (poster) | Summary: In this paper, the authors aim to address the problem of active fine-tuning. The difference from active learning is that in active fine-tuning the models are pre-trained rather than trained from scratch; therefore, the characteristics of the pre-trained model are important for this problem. While the uncertainty-based methods of active learning can disturb the quality of the pre-trained model, ActiveFT ensures stable training by selecting the most representative samples. The authors' idea lies in refining the boundaries by selecting uncertain samples on top of representative ones. Their active fine-tuning framework, BiLAF, selects central and boundary samples to capture both diversity and uncertainty. It also introduces novel improvements, including unsupervised denoising and an iterative strategy for boundary sample selection. Finally, the paper conducts extensive experiments and ablation studies.
Strengths: The authors provide a new active fine-tuning framework that combines the advantages of traditional uncertainty methods with the state-of-the-art ActiveFT method. They also provide extensive data experiments and analyses to illustrate the effectiveness of the proposed method.
Weaknesses: Boundary sample selection is not discussed for fine-grained categorization, a case with less category differentiation. Besides, what is the gap between active fine-tuning learning and the upper performance of fine-tuning using all samples?
Technical Quality: 3
Clarity: 3
Questions for Authors: I have the following questions/comments for the authors:
1. Does using a pre-trained model to select fine-tuned samples amplify errors in the pre-trained model? I noticed significant performance differences between the different pre-trained models in Table 7.
2. Is it reasonable to consider samples that deviate from the center of the pseudo-class as noise?
3. On different datasets, when the labeled sample size is small, in which approximate range is boundary sample selection not required?
4. Is the proposed method stable for tasks like fine-grained classification?
5. The notation for intra-class and inter-class distances should be changed from Xi to Ci, using the cluster notation to describe intra-class and inter-class distances.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: For downstream task fine-tuning, which in many cases is application-specific, one of the problems I often encounter is how to determine the data collection conditions as best as possible in order to minimize costs. Therefore, the biggest problem I encountered was not having enough samples for classification.
Therefore, I suggest the authors to further improve the results for tasks with higher annotation costs, such as detection and segmentation tasks or domains requiring expert knowledge annotation such as remote sensing and medicine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***Q1: Fine-grained classification?***
**R1:** Following your suggestion, we utilized the CUB-200-2011 dataset, which includes $200$ bird species with a total of $11,788$ images. According to the default configuration of the dataset, $5,994$ samples are used for training, with the remainder used for testing. Given the large number of classes, we used 10% of the data as Core Samples to select Boundary Samples, while all other parameters were set according to the default values reported in the paper. **The following table clearly demonstrates that our approach continues to hold a leading position in fine-grained classification datasets.**
|Percent|Select Number|Random|ActiveFT|BiLAF(Ours)|
|-|-|-|-|-|
|20%|1198|46.32|47.83|**48.53**|
|30%|1798|58.85|59.67|**60.52**|
|40%|2397|66.75|67.36|**68.31**|
|50%|2997|72.98|73.25|**74.06**|
>***Q2: What is the gap between active fine-tuning learning and the upper performance of fine-tuning using all samples?***
**R2:** The table presents the upper bound performance achieved by training with all available data. Despite a significant gap between training with a subset of samples and full selection, our method consistently outperforms random sampling at equivalent annotation rates.
|DataSet|0.5% Rand|0.5% BiLAF|1% Rand|1% BiLAF|2% Rand|2% BiLAF|100% Full|
|-|-|-|-|-|-|-|-|
|Cifar10|77.3|81.0|82.2|89.2|88.9|92.5|**99.0**|
|DataSet|1% Rand|1% BiLAF|2% Rand|2% BiLAF|5% Rand|5% BiLAF|10% Rand|10% BiLAF|100% Full|
|-|-|-|-|-|-|-|-|-|-|
|Cifar100|14.9|31.8|24.3|43.5|50.8|62.8|69.3|73.7|**90.5**|
|DataSet|1% Rand|1% BiLAF|2% Rand|2% BiLAF|5% Rand|5% BiLAF|100% Full|
|-|-|-|-|-|-|-|-|
|ImageNet|45.1|50.8|52.1|56.9|64.3|66.2|**81.5**|
>***Q3:Does using a pre-trained model to select fine-tuned samples amplify errors in the pre-trained model? Significant performance differences in Table 7.***
**R3:** In Table 7 and the principal experiments of our paper, **ViT-S models pretrained with iBOT and DINO frameworks exhibit comparable performance, whereas ResNet50 differs due to its unique architecture and model size**. Objectively speaking, variations in the samples selected can arise based on the quality of features and the upper limits of the pretrained models. Nonetheless, Table 7 underscores the universality of our approach, showcasing its capability to **effectively select superior samples across various pretrained models**.
>***Q4: Is it reasonable to consider samples that deviate from the center of the pseudo-class as noise?***
**R4:** 1. Similar to **typical denoising scenarios**, points significantly distant from their cluster are often deemed as noise. In our approach, we implement **localized denoising**, where each category has a center, and noise specific to that locality is removed. 2. In practice, due to inadequate feature learning, it's common for samples potentially belonging to two categories to intermingle at the boundaries. Therefore, our denoising aims to **clear these mixed areas to avoid erroneous sample selection**. This is a trade-off: denoising allows us to conservatively select boundary samples to address poorly learned features and classifications, while opting not to denoise would constitute a bolder choice.
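As a rough illustration of the localized denoising described above (our own sketch, not the paper's exact procedure: here "noise" is simply the most distant fraction of samples from a pseudo-class center, and `keep_ratio` is a hypothetical parameter):

```python
import math

def denoise_local(samples, center, keep_ratio=0.8):
    """Keep only the fraction of samples closest to the pseudo-class
    center; the most distant points are treated as noise and dropped."""
    ranked = sorted(samples, key=lambda s: math.dist(s, center))
    return ranked[: max(1, int(len(ranked) * keep_ratio))]

cluster = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.2), (0.3, 0.1), (10.0, 10.0)]
kept = denoise_local(cluster, center=(0.0, 0.0))
# the outlier (10.0, 10.0) is dropped; the four nearby points survive
```

Running this per pseudo-class center clears the mixed boundary regions before boundary samples are scored, matching the conservative trade-off described above.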
>***Q5: On different datasets, when the labeled sample size is less than, which approximate range is boundary sample selection not required?***
**R5:** We employed the Core Sample Selection method to obtain different numbers of central points and then analyzed the benefits these central points brought. Specifically, we define ***Distance*** as the mean Euclidean distance from each sample to the nearest selected point in the feature space and investigate the ***Rate of Return*** (incremental benefit per core sample) over different ranges, where *Rate of Return = Distance Difference / Core Number Difference between two adjacent columns*.
- CIFAR10
|Core Num|50|100|150|250|375|500|1000|1500|2500|5000|
|-|-|-|-|-|-|-|-|-|-|-|
|Distance|0.7821|0.7588|0.7472|0.7307|0.7203|0.7117|0.6896|0.6746|0.6506|0.6010|
|Rate of Return $\times 10^{-4}$|-|4.6575|2.3264|1.6422|0.8362|0.6887|0.4420|0.3002|0.2398|0.1984|
- CIFAR100
|Core Num|50|100|150|250|375|500|1000|1500|2500|5000|
|-|-|-|-|-|-|-|-|-|-|-|
|Distance|0.8378|0.8082|0.7913|0.7724|0.7564|0.7478|0.7229|0.7059|0.6800|0.6272|
|Rate of Return $\times 10^{-4}$|-|5.9221|3.3791|1.8906|1.2797|0.6845|0.4985|0.3404|0.2589|0.2112|
We found that the rate of return diminishes gradually, indicating that core samples are crucial in the early stages, while the benefits decrease significantly later on. **A clear demarcation point can serve as a guide for when to begin Boundary Sample Selection**, such as the range of 250-375 for CIFAR-10 and 375-500 for CIFAR-100. **This provides a simple yet effective guideline.** Additionally, in practical applications, we discovered that **introducing boundary points earlier may yield better results**, such as on CIFAR-100 with a 1% annotation budget (500 samples).
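The Rate of Return column can be reproduced with a short computation; the values below are copied from the CIFAR-10 row above, so the results match the reported numbers only up to the rounding of the listed distances:

```python
# Marginal reduction in mean nearest-core distance per additional core sample.
core_nums = [50, 100, 150, 250, 375, 500, 1000, 1500, 2500, 5000]
distances = [0.7821, 0.7588, 0.7472, 0.7307, 0.7203, 0.7117,
             0.6896, 0.6746, 0.6506, 0.6010]  # CIFAR-10 row

rates = [(distances[i - 1] - distances[i]) / (core_nums[i] - core_nums[i - 1])
         for i in range(1, len(core_nums))]
```

The sequence decreases monotonically, so a practitioner can switch to Boundary Sample Selection at the first range where the rate drops below a chosen threshold.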
>***Q6: The notation for intra-class and inter-class distances should be changed from Xi to Ci.***
**R6:** Thank you for your suggestion. Given that our distance metric applies to each individual sample, I understand your recommendation to express $d_{intra}(x_j)$ as $d_{intra}(x_j, C_i)$ for greater clarity. We will adopt your suggestion and implement this change in the revised version.
>***Q7: Recommend Tasks with higher annotation costs, such as detection segmentation, remote sensing and medicine.***
**R7:** There are two aspects to consider: which data should be annotated, and how to increase the annotators' speed. Due to time constraints and our lack of exposure to remote sensing and medical data, we plan to conduct further research in the future. As for detection and segmentation tasks, our experiments already cover these, specifically on the PASCAL VOC and ADE20k datasets.
We hope this response helps address your concerns, and we look forward to your further feedback.
---
Rebuttal 2:
Title: Looking Forward to Further Feedbacks
Comment: Thank you for your insightful suggestions. We believe we have comprehensively addressed your questions regarding the applicability of different pre-trained models, scenarios involving fine-grained classification, the rationale for denoising, and the threshold for selecting boundary samples.
It is worth noting that our method can maintain a leading performance across different pre-trained models and effectively enhance data quality in fine-grained classification scenarios to achieve optimal performance. Additionally, we have proposed a simple yet effective guideline as a threshold for selecting boundary samples.
We are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them.
We appreciate your time and effort reviewing our manuscript. Thanks for your consideration! | Summary: The paper proposes an active fine-tuning method that considers both diversity and uncertainty. This method selects uncertain samples through an unsupervised denoising approach and boundary score evaluation. The efficiency and effectiveness of this method, which involves selecting central and boundary samples, have been validated through multiple experiments.
Strengths: 1. The writing is generally clear, and the content is relatively comprehensive.
2. The method considers both diversity and uncertainty, which is a good idea.
3. The experiments are quite thorough.
Weaknesses: 1. The main diagram, Figure 2, lacks detail and clarity, and its meaning overlaps with the schematic in Figure 1. Figure 2 should focus on explaining how the methods "Boundary Score Calculation" and "Iterative Selection and Removal" are implemented (in detail) rather than simply outlining the overall process.
2. Is there consistency between the decision boundaries of the K pseudo-classes and the decision boundaries of the true task categories? Are the decision boundaries of the unsupervised method consistent with the decision boundaries when the pre-trained model is fine-tuned on downstream tasks? (Different types and capabilities of models have different decision boundaries.) It seems these issues were neither considered nor analyzed.
3. The theoretical foundation is weak. Is it necessary to introduce data on class decision boundaries? Is there a theoretical basis for applying "diversity and uncertainty" sampling to active fine-tuning methods? Is there a theoretical basis for the design and implementation methods?
4. Exploration of the number of uncertain sample points: How is K determined? Having too many such sample points presents two problems: incorrectly labeled samples and disruption of learning the general features of the categories.
5. "but neglects the contributions of samples to local boundaries."
Is the statement in the abstract deviating from the original meaning? Should it be "but neglects the contributions of local boundary samples" instead of "but neglects the contributions of samples to local boundaries"?
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see the weakness.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***Q1: Figure 2 should be modified.***
**R1:** Thank you for your suggestion. In the revised version, we will enhance the figure to improve its detail and clarity.
>***Q2: Is there consistency between the decision boundaries of the K pseudo-classes and the decision boundaries of the true task categories? Are the decision boundaries of the unsupervised method consistent with the decision boundaries when the pretrained model is fine-tuned on downstream tasks?***
**R2:** Yes, these conditions are consistent! **We conducted linear probing on all the samples with true labels using features from both the pre-trained model and the oracle model (fine-tuned on all samples)**. We analyzed whether samples selected using different methods—Random, ActiveFT, BiLAF (ours)—tend to be near the decision boundaries. We used two metrics for this analysis: 1) **Entropy**, where a higher value indicates greater uncertainty and a propensity towards boundary samples. 2) **Prob_Diff** (probability difference between the top two classes) calculated as the difference between the highest and second-highest probabilities. A smaller value indicates that the sample is closer to the boundary between these two classes.
We computed the mean values over all samples and over the top 50 samples. We found that samples selected based on pre-trained features exhibit higher Entropy and smaller Probability Differences in both scenarios, sufficiently demonstrating that **our method aligns with the true decision boundaries, and the samples selected from unsupervised pre-trained features remain consistent when the pre-trained model is fine-tuned on downstream tasks**.
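For reference, the two metrics can be computed as follows. This is a minimal sketch; we assume natural-log entropy, since the rebuttal does not state the base:

```python
import math

def entropy(probs):
    """Shannon entropy (natural log); higher = more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def prob_diff(probs):
    """Gap between the top-2 class probabilities; smaller = nearer a boundary."""
    top2 = sorted(probs, reverse=True)[:2]
    return top2[0] - top2[1]

p = [0.45, 0.40, 0.15]   # a sample close to the boundary between two classes
q = [0.95, 0.03, 0.02]   # a confident, interior sample
assert entropy(p) > entropy(q)
assert prob_diff(p) < prob_diff(q)
```

Averaging these quantities over a selected subset yields the Entropy and Prob_Diff columns reported in the tables of the supplementary comment.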
>***Q3: The theoretical foundation is weak. Is it necessary to introduce data on class decision boundaries? Is there a theoretical basis for applying "diversity and uncertainty" sampling to active fine-tuning methods? Is there a theoretical basis for the design and implementation methods?***
**R3:**
1. **Theoretical Assurance**: Assuming a binary classification problem that is linearly separable. Our solid theoretical foundation lies in Support Vector Machines (SVM). The objective of SVM is to find a hyperplane that maximizes the margin, enhancing the classifier's generalization ability. Only the points on the boundary (support vectors) contribute to the margin, while internal points do not affect its size. This classical design emphasizes the **importance of boundary samples**.
2. **Related Work**: Previous works in Active Learning invariably involve diversity and uncertainty. Diversity aims to fit the original data distribution better with fewer samples. As the labeled data increases, allocating the remaining budget to uncertainty becomes common.
3. **Design Philosophy**: Faced with multiple challenges in a single-iteration data selection scenario, we must determine class centers and choose boundary samples. We also need to consider potential noise, where different classes mix at the boundaries and each class may have multiple feature centers. Thus, we select more pseudo-classes than actual classes to ensure coverage of the different classes, and we employ denoising to reduce sample mixing at boundaries and avoid noise as much as possible. **The denoising process mainly targets noise in non-ideal scenarios.** **Opponent Penalty and Iterative Selection encourage regional diversity among the selected boundary samples.** Overall, based on solid boundary theory, we have designed each part for real-world noisy scenarios, using a fixed set of default parameters throughout except for the number of pseudo-classes, which varies with sample size, further demonstrating our method's effectiveness.
4. **Oracle Verification**: We applied our method to features from an Oracle model (fine-tuned on all samples). At a 5% budget on CIFAR-100 we achieved 65.7%, and at a 10% budget we reached 75.3%, both over 1.5% higher than selecting central points alone, **demonstrating the practical upper bound**. We also confirmed our method's **consistency** with actual conditions in Q2. Thus, this also validates the effectiveness of our approach from this perspective.
>***Q4: Exploration of the number of uncertain sample points: How is K determined? Having too many such sample points presents two problems: incorrectly labeled samples and disruption of learning the general features of the categories.***
**R4:**
1. **Core Number K**: We conduct experiments to estimate an approximate threshold for the Core Number needed.
2. **Boundary Sample Selection**: Our method's components tackle the problem:
- **Denoising**: Removes highly uncertain boundary points to prevent class mixing, conservatively choosing more accurate points.
- **Opponent Penalty**: Encourages diverse explorations across boundaries, allowing a more varied distribution.
- **Iterative Selection**: Removes nearby points of selected samples, maintaining diversity by choosing boundary-nearer points instead of regional centers.
3. **Practical Experiments**: Since exact sample centers are unknown, we use numerous pseudo-classes, K, with boundaries between same-label pseudo-classes acting as internal points.
4. **Experiment Consistency**: This flexible framework can achieve SOTA results using the same default hyperparameters. The ablation study showed that a suitable K enhances sample quality which represent much room for improvement.
>***Q5: Should it be "but neglects the contributions of local boundary samples" in abstract?***
**R5:** We will adopt your suggestion in the revised version, as it appears more reasonable. Thank you for helping to further enhance the readability of our paper.
Thank you for your valuable time and insightful comments. We hope our responses have addressed your concerns. We welcome further discussion and would sincerely appreciate it if you could reconsider your rating.
[Finally, the experiments for Q2 and Q4 are detailed in official comments due to the character limit.]
---
Rebuttal 2:
Title: Supplementary Experimental Results
Comment: >***Experiments For Q2.***
Experiments are as follows:
|CIFAR10(Pretrain)|Selected Nums|Entropy $\uparrow$|Prob_Diff$\downarrow$|Entropy(Top50)$\uparrow$|Prob_Diff(Top50)$\downarrow$|
|-|-|-|-|-|-|
|Random|250|0.0935|0.9486|0.4260|0.7536|
|ActiveFT|250|0.0424|0.9747|0.2073|0.8743|
|BiLAF(ours)|250|**0.1023**|**0.9366**|**0.4769**|**0.6924**|
|Random|500|0.0960|0.9416|0.7022|0.5056|
|ActiveFT|500|0.0433|0.9763|0.3690|0.7799|
|BiLAF(ours)|500|**0.1149**|**0.9273**|**0.7089**|**0.4500**|
|Random|1000|0.0955|0.9430|0.9181|0.3081|
|ActiveFT|1000|0.0849|0.9495|0.8810|0.3560|
|BiLAF(ours)|1000|**0.1461**|**0.9064**|**1.0262**|**0.2005**|
|CIFAR100(Pretrain)|Selected Nums|Entropy $\uparrow$|Prob_Diff$\downarrow$|Entropy(Top50)$\uparrow$|Prob_Diff(Top50)$\downarrow$|
|-|-|-|-|-|-|
|Random|500|**0.6240**|**0.7295**|**2.2516**|**0.0876**|
|ActiveFT|500|0.2962|0.8664|1.5663|0.2369|
|BiLAF(ours)|500|0.3933|0.8167|1.7832|0.1423|
|Random|1000|**0.5317**|**0.7766**|2.2375|0.0594|
|ActiveFT|1000|0.3650|0.8430|2.0812|0.0833|
|BiLAF(ours)|1000|0.4751|0.7815|**2.2606**|**0.0542**|
|Random|2500|0.5253|0.7749|2.6790|0.0232|
|ActiveFT|2500|0.4851|0.7936|2.6653|0.0206|
|BiLAF(ours)|2500|**0.5795**|**0.7476**|**2.7196**|**0.0192**|
|Random|5000|0.5442|0.7652|2.8791|0.0151|
|ActiveFT|5000|0.5197|0.7768|2.8487|0.0118|
|BiLAF(ours)|5000|**0.6219**|**0.7336**|**2.9194**|**0.0090**|
|CIFAR10(Oracle)|Selected Nums|Entropy $\uparrow$|Prob_Diff$\downarrow$|Entropy(Top50)$\uparrow$|Prob_Diff(Top50)$\downarrow$|
|-|-|-|-|-|-|
|Random|250|0.001512|0.999831|0.002437|0.999729|
|ActiveFT|250|0.001521|0.999830|0.002403|0.999717|
|BiLAF(ours)|250|**0.001541**|**0.999827**|**0.002473**|**0.999714**|
|Random|500|0.001483|0.999835|0.002782|0.999676|
|ActiveFT|500|0.001542|0.999828|0.002958|0.999644|
|BiLAF(ours)|500|**0.001605**|**0.999820**|**0.003020**|**0.999641**|
|Random|1000|0.001551|0.999823|0.003543|0.999575|
|ActiveFT|1000|0.001527|0.999829|0.003472|0.999586|
|BiLAF(ours)|1000|**0.001598**|**0.999820**|**0.003588**|**0.999567**|
|CIFAR100(Oracle)|Selected Nums|Entropy $\uparrow$|Prob_Diff$\downarrow$|Entropy(Top50)$\uparrow$|Prob_Diff(Top50)$\downarrow$|
|-|-|-|-|-|-|
|Random|500|**0.049238**|**0.992216**|**0.176811**|**0.956287**|
|ActiveFT|500|0.040140|0.993335|0.124594|0.962662|
|BiLAF(ours)|500|0.042792|0.994406|0.137114|0.965457|
|Random|1000|**0.047730**|0.993905|0.202417|0.963037|
|ActiveFT|1000|0.042763|0.993913|0.198896|0.961337|
|BiLAF(ours)|1000|0.045447|**0.993854**|**0.203469**|**0.960462**|
|Random|2500|0.046654|0.993407|0.297133|0.919803|
|ActiveFT|2500|0.047036|0.993245|0.303767|0.919122|
|BiLAF(ours)|2500|**0.047357**|**0.993103**|**0.309907**|**0.917868**|
|Random|5000|0.047028|0.993741|0.407092|0.897825|
|ActiveFT|5000|0.045173|0.994264|0.340326|0.925870|
|BiLAF(ours)|5000|**0.049476**|**0.992885**|**0.466601**|**0.842029**|
Therefore, there is consistency between the decision boundaries of the K pseudo-classes and the decision boundaries of the true task categories. The decision boundaries of the unsupervised method is consistent with the decision boundaries when the pretrained model is finetuned on downstream tasks.
>***Experiments For Q4.***
We employed the Core Sample Selection method to obtain different numbers of central points and then analyzed the benefits these central points brought. Specifically, we define ***Distance*** as the mean Euclidean distance from each sample to the nearest selected point in the feature space and investigate the ***Rate of Return*** (incremental benefit per core sample) over different ranges, where *Rate of Return = Distance Difference / Core Number Difference between two adjacent columns*.
- CIFAR10
|Core Num|50|100|150|250|375|500|1000|1500|2500|5000|
|-|-|-|-|-|-|-|-|-|-|-|
|Distance|0.7821|0.7588|0.7472|0.7307|0.7203|0.7117|0.6896|0.6746|0.6506|0.6010|
|Rate of Return $\times 10^{-4}$|-|4.6575|2.3264|1.6422|0.8362|0.6887|0.4420|0.3002|0.2398|0.1984|
- CIFAR100
|Core Num|50|100|150|250|375|500|1000|1500|2500|5000|
|-|-|-|-|-|-|-|-|-|-|-|
|Distance|0.8378|0.8082|0.7913|0.7724|0.7564|0.7478|0.7229|0.7059|0.6800|0.6272|
|Rate of Return $\times 10^{-4}$|-|5.9221|3.3791|1.8906|1.2797|0.6845|0.4985|0.3404|0.2589|0.2112|
We found that the rate of return diminishes gradually, indicating that core samples are crucial in the early stages, while the benefits decrease significantly later on. **A clear demarcation point can serve as a guide for when to begin Boundary Sample Selection**, such as the range of 250-375 for CIFAR-10 and 375-500 for CIFAR-100. **This provides a simple yet effective guideline.** Additionally, in practical applications, we discovered that **introducing boundary points earlier may yield better results**, such as on CIFAR-100 with a 1% annotation budget (500 samples).
---
Rebuttal 3:
Title: Looking Forward to Further Feedback
Comment: Thank you for your insightful suggestions. We believe we have comprehensively addressed your questions regarding the consistency of decision boundaries, the basis for designing boundary samples, and the threshold for the number of uncertain samples.
It is worth noting that our method aligns with the true decision boundaries, and the samples selected from unsupervised pre-trained features are consistent with those used when the pre-trained model is fine-tuned on downstream tasks. We have validated the effectiveness of boundary samples from three perspectives: theoretical guarantees, related work, and Oracle experiments. Ultimately, we have proposed a simple yet effective guideline as a threshold for selecting boundary samples.
We are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them.
We appreciate your time and effort reviewing our manuscript. Thanks for your consideration!
---
Rebuttal 4:
Comment: Thank you very much for your recognition of our paper and the increased score! We also appreciate the valuable suggestions you've provided. We will make further improvements in the revised version to meet your expectations. | Summary: In this paper, the authors introduce a novel Bi-Level Active Finetuning Framework (BiLAF) designed to address the limitations of existing active learning methods in the context of the pretraining-finetuning paradigm. The framework aims to optimize sample selection for finetuning models within a limited annotation budget. The authors propose an innovative unsupervised denoising technique to eliminate noisy samples and use a newly designed boundary score metric for iterative boundary sample selection. Extensive experiments demonstrate that BiLAF outperforms existing methods across various datasets and tasks.
Strengths: - The bi-level framework that combines core sample selection with boundary sample selection effectively addresses limitations in existing active learning methods.
- The novel unsupervised denoising technique effectively eliminates noisy samples, improving the reliability of sample selection.
- Extensive experiments on multiple datasets and tasks consistently show BiLAF outperforms state-of-the-art methods.
- The paper is well-organized with a clear explanation of the motivation, proposed method, and experimental results.
Weaknesses: Active Finetuning is advantageous for model fine-tuning in scenarios with limited data, and the motivation behind the proposed BiLAF is clear. However, I have concerns regarding the generalizability of this method to different data sizes. As shown in Table I, there is a significant performance drop on the CIFAR10 dataset under the 0.5% setting. This decline might be attributed to the small size of CIFAR10 images and the low 0.5% ratio, making it challenging to select and denoise boundary samples. It raises the question of whether BiLAF requires a certain threshold of fine-tuning data to be effective, which warrants further investigation by the authors. Additionally, the CVPR 2024 paper (see reference below) reports the performance on CIFAR10 under 0.1% and 0.2% settings, which is highly relevant. I believe that Active Finetuning would be more meaningful in scenarios with extremely limited data (e.g., few-shot fine-tuning), where changing the random seed can lead to significant accuracy fluctuations. Therefore, I recommend the authors include experiments and analyses on such scenarios. Furthermore, comparative experiments with the other settings in the paper should be included to substantiate the method's effectiveness.
Moreover, when the amount of fine-tuning data increases, BiLAF's performance becomes comparable to the baseline ActiveFT. Hence, it would be insightful for the authors to discuss whether BiLAF remains effective with larger amounts of fine-tuning data, such as 20% of the ImageNet 1k dataset. This additional discussion would enhance the paper's comprehensiveness.
Xu, Wenshuai, et al. "ActiveDC: Distribution Calibration for Active Finetuning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: Same as weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As described in the Weakness section, there might be some critical aspects that are not verified enough. The authors are encouraged to show additional results to support their claims.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***Q1: Threshold: Does BiLAF require a certain threshold of fine-tuning data to be effective?***
**R1:** This is an excellent question. The performance gap with just 0.5% of CIFAR-10 data indeed raises considerations about scale. In traditional active learning, the applicability of different methods varies with scale. The classic Coreset [1] method aims to cover all samples with the selected points, whereas ProbCover [2] notes that Coreset performs poorly when the data scale is very small, motivating a coverage radius that ensures each selected point controls a fixed area. In our scenario, when the budget is extremely limited, using central points to fit the dense regions of the distribution is the more rational choice, without spending additional samples on boundary selection. We believe this behavior is both normal and expected.
Following this, we **explored what such a threshold** might look like. We employed the Core Sample Selection method to obtain different numbers of central points and then analyzed the benefits these central points brought. Specifically, we define ***Distance*** as the mean Euclidean distance from each sample to the nearest selected point in the feature space and investigate the ***Rate of Return*** (Incremental Benefit per Core Sample) over different ranges, where *Rate of Return = Distance Difference / Core Number Difference between two adjacent columns*.
- CIFAR10
|Core Num|50|100|150|250|375|500|1000|1500|2500|5000|
|-|-|-|-|-|-|-|-|-|-|-|
|Distance|0.7821|0.7588|0.7472|0.7307|0.7203|0.7117|0.6896|0.6746|0.6506|0.6010|
|Rate of Return $\times 10^{-4}$|-|4.6575|2.3264|1.6422|0.8362|0.6887|0.4420|0.3002|0.2398|0.1984|
- CIFAR100
|Core Num|50|100|150|250|375|500|1000|1500|2500|5000|
|-|-|-|-|-|-|-|-|-|-|-|
|Distance|0.8378|0.8082|0.7913|0.7724|0.7564|0.7478|0.7229|0.7059|0.6800|0.6272|
|Rate of Return $\times 10^{-4}$|-|5.9221|3.3791|1.8906|1.2797|0.6845|0.4985|0.3404|0.2589|0.2112|
We found that the rate of return diminishes gradually, indicating that core samples are crucial in the early stages, while the benefits decrease significantly later on. **A clear demarcation point can serve as a guide for when to begin Boundary Sample Selection**, such as the range of 250-375 for CIFAR-10 and 375-500 for CIFAR-100. **This provides a simple yet effective guideline.** Additionally, in practical applications, we discovered that **introducing boundary points earlier may yield better results**, such as CIFAR100 with 1% (500 samples) annotation samples.
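The Rate of Return rows above can be reproduced directly from the Distance rows; a minimal sketch (values copied from the CIFAR-10 table; the function name is ours, not the paper's):

```python
import numpy as np

def rate_of_return(core_nums, distances):
    """Incremental benefit per core sample between adjacent columns:
    (Distance difference) / (Core Number difference)."""
    core_nums = np.asarray(core_nums, dtype=float)
    distances = np.asarray(distances, dtype=float)
    return (distances[:-1] - distances[1:]) / (core_nums[1:] - core_nums[:-1])

# CIFAR-10 row from the table above
cores = [50, 100, 150, 250, 375, 500, 1000, 1500, 2500, 5000]
dists = [0.7821, 0.7588, 0.7472, 0.7307, 0.7203, 0.7117,
         0.6896, 0.6746, 0.6506, 0.6010]
rr = rate_of_return(cores, dists)  # absolute units; the table reports x10^-4
```

The monotone decrease of `rr` is what motivates the demarcation-point guideline in the text.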
Similarly, how to allocate proportions remains a question. Our ablation study shows that for different budgets, there are varying optimal core numbers, which remains an open question. To some extent, we can determine this based on the rate of return and the number of sample categories. In the paper, we adhered to a uniform hyperparameter setting, which actually demonstrates that our framework offers more flexible parameter choices with significant potential for performance improvement.
[1] Sener, Ozan, and Silvio Savarese. "Active Learning for Convolutional Neural Networks: A Core-Set Approach." ICLR 2018.
[2] Yehuda, Ofer, et al. "Active Learning Through a Covering Lens." NeurIPS 2022.
>***Q2: Limited Data Condition: ActiveDC reports on the performance of CIFAR10 under 0.1% and 0.2% settings. Active Finetuning would be more meaningful in scenarios with extremely limited data?***
**R2:** ActiveDC employs pseudo-labeling from semi-supervised learning, which pertains to how to fine-tune after sample selection, rather than the selection of the samples themselves. Therefore, the appropriate comparisons should be with other semi-supervised training or fine-tuning methods. Following Reviewer V2Vc's suggestion in Q2, we experimented with KNN and Linear Probing in scenarios with very small data volumes and achieved results that significantly surpassed those of ActiveDC.
|CIFAR10|0.1%|0.2%|
|-|-|-|
|Random+KNN|64.2|76.6|
|Random+Linear|67.1|78.7|
|Finetune(ActiveDC)|61.3|73.1|
We also found that these basic methods become ineffective as the data volume increases. Consequently, we believe that **active selection for full-scale fine-tuning should focus on scenarios where the data volume is not extremely small**. Our method selects higher-quality samples and achieves superior results across different fine-tuning paradigms. We will reference the ActiveDC paper in the revised version and discuss the differences between our work and ActiveDC.
>***Q3: Larger Data Condition: How about larger amounts of finetuning data, such as 20% of ImageNet.***
**R3:** **We have supplemented our results with 10% and 20% budget scenarios on ImageNet, and our method continues to maintain a strong advantage.** Theoretically, as data volume increases, the selection method ActiveFT tends to lean towards Random, thus tending towards a uniform distribution. However, our method effectively identifies boundaries, providing better separability. Empirically, our method has shown impressive results. **The Improvement Ratio has increased**, where the Improvement Ratio is defined as *Improvement Ratio=(BiLAF - Random) / (ActiveFT - Random)*. In active learning, it is normal for accuracy differences to decrease as data volumes increase, making the improvement ratio a reasonable metric to assess performance.
|Selection Ratio|Random|ActiveFT|BiLAF(ours)|Improvement Ratio|
|-|-|-|-|-|
|1%|45.1|50.1|50.8|1.14|
|2%|52.1|55.8|56.9|1.30|
|5%|64.3|65.3|66.2|1.90|
|10%|71.2|71.8|72.5|2.17|
|20%|74.5|74.8|75.2|2.33|
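The Improvement Ratio column can be reproduced from the accuracy columns; a minimal sketch (accuracies taken from the table above; the function name is ours):

```python
def improvement_ratio(random_acc, activeft_acc, bilaf_acc):
    """Improvement Ratio = (BiLAF - Random) / (ActiveFT - Random)."""
    return (bilaf_acc - random_acc) / (activeft_acc - random_acc)

# ImageNet rows from the table above (top-1 accuracy in %)
rows = {
    "1%":  (45.1, 50.1, 50.8),
    "10%": (71.2, 71.8, 72.5),
    "20%": (74.5, 74.8, 75.2),
}
ratios = {k: improvement_ratio(*v) for k, v in rows.items()}
```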
Overall, we **have effectively analyzed our method across different scales**, proposed effective boundary metrics for smaller data volumes, and empirically demonstrated the superior results and feasibility of our method with larger data volumes. We hope our responses have addressed your concerns. We welcome further discussions and sincerely appreciate it if you could reconsider your rating.
---
Rebuttal 2:
Title: Looking Forward to Further Feedback
Comment: Thank you for your insightful suggestions. We believe we have comprehensively addressed your questions regarding the data scale used for fine-tuning the model and the scenarios where our method is applicable.
It is worth noting that we have proposed a simple yet effective guideline as a threshold for selecting boundary samples, which enhances the flexibility of our method. Additionally, we have conducted supplementary experiments that demonstrate our method maintains its advantages and is even more effective when applied to larger datasets.
We are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them.
We appreciate your time and effort reviewing our manuscript. Thanks for your consideration!
---
Rebuttal 3:
Comment: Thank you for your recognition of our paper and the increased score! | Summary: This paper proposes a Bi-Level Active Finetuning Framework (BiLAF) for optimizing sample selection in the pretraining-finetuning paradigm. BiLAF combines global diversity and local decision uncertainty through two stages: core sample selection and boundary sample selection. Without requiring labels, the method successfully identifies pseudo-class centers, employs a tailored denoising technique, and iteratively selects boundary samples. Experimental results demonstrate that BiLAF consistently outperforms existing baseline methods across various vision tasks, confirming its superior efficacy in improving model performance.
Strengths: 1. The paper is well-structured, with clear organization into subsections that introduce the different stages of the method. The use of an algorithm to summarize the entire process is effective.
2. The approach not only focuses on the centers of each class but also pays attention to the boundary samples between classes, effectively balancing global diversity and local decision uncertainty when selecting samples for annotation.
3. The iterative selection and removal strategy, along with the opponent penalty, effectively prevents the aggregation of multiple samples near the same pseudo-class boundary that faces the same opposing pseudo-class center.
Weaknesses: 1. There is ambiguity in the use of symbols in the method description, leading to unclear explanations.
2. The authors claim that the method is effective for imbalanced datasets but do not provide a thorough theoretical explanation. Additionally, experimental results show that the improvement over ActiveFT on the CIFAR100LT dataset with a 15% annotation ratio is only 0.3%, whereas the improvement is much more significant on balanced datasets. This contradicts the authors' claim. The authors explain that "the default denoising removal ratio parameters might remove minority samples in long-tail distributions," which highlights a potential issue with their algorithm design under long-tail conditions, conflicting with their earlier statement.
3. The ablation study shows that the inclusion of the opponent penalty contributes little to the performance improvement and even causes performance degradation under lower budgets. The necessity and design of the opponent penalty need to be reconsidered.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What does c_i in equation (1) refer to? Previously, c_i was used to denote the cluster centers, but here it seems the authors use c_i to represent the assignment of f_j. Additionally, what does a_i of X_i in section 3.3.2 represent? The inconsistent use of symbols is confusing.
2. On what basis do the authors claim that this algorithm is effective on imbalanced datasets? Particularly when the authors state that classes with fewer samples might disappear during the denoising and iterative selection processes.
3. In the explanation of Fig. 3, the authors state that "Pentagrams represent the selected core samples, while circles denote the chosen boundary samples." However, the figure shows that the learned center points are still biased towards the boundaries. Does this not risk the model focusing too much on the boundaries and neglecting the intra-class distribution characteristics?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge the issue that in long-tail scenarios, the denoising methods might remove outliers that are key samples. However, what the authors do not mention is that the numerous hyperparameters in their algorithm can lead to difficulties in achieving optimal performance and even cause instability in different scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Important Notes***. The reviewer's claim in the Limitations that "numerous hyperparameters can cause instability in different scenarios" is not justified. Except for the *Core Number*, we **consistently employ the same hyperparameters across all six different tasks**, including the *nearest neighbors number*, *denoising rate*, *clustering fraction* and *opponent penalty*. The adjustment of the Core Number is tailored to the varying class numbers across tasks, and we **ensure uniformity in the Core Number for different budgets within the same task**. This uniformity **underscores the stability and scalability of our framework**. Tuning these parameters could further raise the performance ceiling of our method; in our Ablation Study, changes in the Core Number and Core Selection Method indicated further enhancements.
>***Q1: There is ambiguity in the use of symbols, such as $c_i$ in equation (1), $a_i$ of $X_i$ in Sec.3.3.2.***
**R1:** The sample set is $X=\{x_1,\ldots,x_N\}$, where the subscript $i$ indexes the sample $x_i$ and its corresponding feature $f_i$. In Section 3.2, Core Sample Selection identifies $K$ center points, represented by the index set $C=\{c_1,\ldots,c_K\}$, $c_i\in[N]$. In Section 3.3.1, Equation (1), for each sample $x_j$, $j\in[N]$, we find the center point $f_{c_i}$ nearest to its feature $f_j$ in the feature space, and assign $x_j$ to the cluster controlled by the center $c_i$, denoted as $U_i$.
In Sec. 3.3.2, we introduce $a_{i,j}$, where $j \in [|U_i|]$, to represent the index of the $j$-th element in the set $U_i$ controlled by the $i$-th center $c_i$. Since subsequent operations are performed on each set $U_i$ individually, $a_{i,j}$ serves as the index of the sample $x_{a_{i,j}}$ and its feature $f_{a_{i,j}}$. In effect, $U_i$ records the indices of all samples in $X$ that belong to the center $c_i$.
The variable $j$ serves as an enumeration variable in different contexts, which might have led to some ambiguity. We will address these ambiguities and provide clearer explanations in the revised version.
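The nearest-center assignment described in R1 can be made concrete with a short sketch (toy 2-D features and variable names are ours; the paper's Equation (1) is the same rule in its own notation):

```python
import numpy as np

def assign_to_centers(features, center_indices):
    """Assign every sample to its nearest center point.
    Returns U, where U[i] lists the sample indices j whose feature f_j
    is closest (Euclidean distance) to the center feature f_{c_i}."""
    centers = features[center_indices]                       # f_{c_1..c_K}
    # pairwise distances: N samples x K centers
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    return {i: np.flatnonzero(nearest == i).tolist()
            for i in range(len(center_indices))}

# toy example: 6 samples in 2-D, centers are samples 0 and 3
feats = np.array([[0., 0.], [0.1, 0.], [0., 0.2],
                  [5., 5.], [5.1, 5.], [4.9, 5.2]])
U = assign_to_centers(feats, [0, 3])
```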
>***Q2: Why is the method effective on imbalanced datasets? Improvement on the CIFAR100LT dataset with 15% ratio is only 0.3% with the claim "the default denoising removal ratio parameters might remove minority samples".***
**R2:** Firstly, none of these methods are specifically tailored for long-tailed distributions. Our boundary-based method exhibits strong representational capabilities in general scenarios, giving us a natural advantage, and its performance on long-tailed datasets supports our claim of the method's universality.
1. Denoising involves a trade-off; excessive denoising can eliminate minority classes at the boundary. However, without any denoising at all, selecting boundary samples based on minority-class centers could mistakenly include majority-class samples, because minority and majority classes often lie close together at the boundaries.
2. Objectively, as the annotation rate increases, ActiveFT tends to select long-tailed samples as well, which may reduce the gap as the data volume grows.
3. The default core number and denoising ratio were selected in the paper, which could potentially exclude long-tailed samples due to their isolated distribution or outlier status. Upon adjusting the Core Number and Denoising Ratio in experiments, we discovered that the model has considerable improvement.
|Denoising Ratio|0%|5%|10%|20%|
|-|-|-|-|-|
|Core Number 2.5%|37.10±0.60|37.61±0.66|37.33±0.71|37.04±0.89|
|Core Number 5%|37.24±0.56|**37.85**±0.59|37.67±0.64|37.17±0.78|
>***Q3: The necessity and design of the opponent penalty need to be reconsidered with little contribution to the performance improvement.***
**R3:** In Ablation Study, we observed an intriguing trend: the negative impact of the components examined in IDs 1-4 generally diminishes as the number of selected samples increases, while the benefits of the opponent penalty grow with a larger sample size. The design intention behind this metric is to prevent the selection of too many similar boundary points, which can degrade model performance. At sample sizes of 5% and 10%, the opponent penalty yields an accuracy improvement of over 0.5%, which we find significant.
As the important notes explained, we used the same hyperparameters in all experiments. In practice, parameters such as Core Number, Denoising Rate, and Opponent Penalty can be further tailored to specific scenarios to enhance the model's training capacity. This metric can also be applied flexibly depending on the amount of data selected.
>***Q4: Learned center points are still biased towards the boundaries? Does this not risk the model focusing too much on the boundaries and neglecting the intra-class distribution characteristics?***
**R4:**
The possible reasons are: 1) each class may have multiple centers; 2) t-SNE visualizes data in two dimensions, so points that appear boundary-biased can still be centers in the high-dimensional space. Actually, in our method, the Opponent Penalty **encourages diverse exploration across boundaries**, allowing a more varied distribution. Iterative Selection removes the neighbors of already-selected samples, **maintaining diversity by choosing boundary-nearer points instead of regional centers**. Moreover, we may sample multiple points within a class as pseudo-centers, and boundary points between pseudo-classes sharing the same label act as internal points of the class, better fitting its distribution. Therefore, our method not only focuses on the classification boundaries but can also enhance the intra-class distribution.
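The iterative mechanics described here (pick a boundary candidate, remove its nearest neighbors, and penalize candidates that face an already-used opposing center) can be sketched generically. This is not the paper's algorithm: the boundary score below is a placeholder (inverse distance to the second-nearest center), and all names and parameters are ours.

```python
import numpy as np

def select_boundary(feats, centers, budget, k_remove=2, penalty=0.5):
    """Generic sketch: iteratively pick the candidate closest to its
    second-nearest ("opposing") center, drop its k nearest neighbors,
    and down-weight candidates facing an already-used opposing center."""
    d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=-1)
    order = np.argsort(d, axis=1)
    opponent = order[:, 1]                                   # second-nearest center
    # placeholder boundary score: closer to the opposing center = higher
    score = 1.0 / (1e-9 + d[np.arange(len(feats)), opponent])
    alive = np.ones(len(feats), dtype=bool)
    used_opponents, picked = {}, []
    while alive.any() and len(picked) < budget:
        # opponent penalty: repeated opposition to the same center is damped
        s = score * penalty ** np.array([used_opponents.get(int(o), 0)
                                         for o in opponent])
        s = np.where(alive, s, -np.inf)
        j = int(np.argmax(s))
        picked.append(j)
        used_opponents[int(opponent[j])] = used_opponents.get(int(opponent[j]), 0) + 1
        # removal step: drop the pick and its nearest alive neighbors
        dist_j = np.linalg.norm(feats - feats[j], axis=1)
        dist_j[~alive] = np.inf
        for n in np.argsort(dist_j)[:k_remove + 1]:
            alive[n] = False
    return picked

# toy: 8 points on a line, two centers at the ends
feats = np.array([[i, 0.] for i in range(8)])
centers = np.array([[0., 0.], [7., 0.]])
picked = select_boundary(feats, centers, budget=3, k_remove=1)
```

On this toy line, the midpoints (the most boundary-like samples) are selected first, and the removal step prevents adjacent picks.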
Thank you for valuable time and insightful comments. We hope our responses have addressed your concerns. We welcome further discussions and sincerely appreciate it if you could reconsider your rating.
---
Rebuttal 2:
Title: Looking Forward to Further Feedback
Comment: Thank you for your insightful suggestions. We believe we have comprehensively addressed your questions regarding the effectiveness of our method in long-tail problems, the necessity of the opponent penalty design, and the selection of sample distributions.
It is worth noting that our method maintained default parameters across all experiments, demonstrating the universality of the model. The design of different components not only preserves this universality but also enhances the flexible architecture and performance ceiling of our method. Additional experiments have confirmed that our approach further improves performance on long-tail tasks.
We are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them.
We appreciate your time and effort reviewing our manuscript. Thanks for your consideration!
---
Rebuttal Comment 2.1:
Comment: Thanks very much for the feedback. My raised issues have been fully addressed. After going through the authors' responses, other reviewers' comments, and the whole review process, I am glad to raise my score by one point.
---
Reply to Comment 2.1.1:
Comment: Thank you for your recognition of our paper and the improved score! We also appreciate the valuable suggestions you provided. We will incorporate these insights into the revised version of our paper, enhancing the clarity of our expressions and formulas to improve the paper's readability. | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their time and insightful feedback. Overall, we are heartened by their recognition of our paper's clear writing and structure (V2Vc, 2Ep4, 8trC, kN8s), insightful balance of diversity and uncertainty (2Ep4, kN8s, vV4Z), technical novelty and effectiveness (2Ep4, 8trC), extensive experiments and analysis (V2Vc, 8trC, kN8s, vV4Z) with the state-of-the-art performance (V2Vc, 8trC).
We have carefully considered and responded to each of the critical insights and suggestions provided by the reviewers. Our aim is to address all concerns and enhance our work through this collaborative process. We will incorporate these constructive comments into the revised version of our paper. We are confident that incorporating the reviewers' suggestions will significantly improve the quality of our paper and contribute to the field. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper introduces BiLAF, a novel approach for selecting boundary examples alongside core samples to enhance the fine-tuning of pre-trained models for downstream tasks. Specifically, the boundary selection strategy leverages the distinction between intra- and inter-class distances within the pre-trained feature space and incorporates an opponent penalty to promote diversity across different boundaries. Experiments and ablation studies demonstrate the effectiveness of the proposed BiLAF in achieving state-of-the-art results.
Strengths: - The paper is well-written and easy to follow.
- In addition to classification tasks, the experiments include different scenarios such as detection and segmentation. In these settings, BiLAF demonstrates the state-of-the-art results.
- Ablation studies and execution time are included in the analysis to help provide a deeper understanding of the proposed BiLAF.
Weaknesses: - BiLAF seems to heavily rely on pre-trained features, as all of its data selection processes are based on them. BiLAF's reliance on pre-trained features for data selection raises concerns due to potential discrepancies between pre-training and fine-tuning tasks. Note that the features from pre-trained classifiers are commonly used in traditional active learning because the labeled and unlabeled datasets are usually from the same task. However, this might not hold true in general pre-training and fine-tuning paradigms. Moreover, Line 120 mentions that “By leveraging pre-trained models, data samples are mapped to robust feature representations that elucidate the relationships among samples, their intra-class counterparts, and inter-class samples from diverse classes.” It remains unanswered and uncertain to what extent these pre-trained features are robust enough to effectively apply BiLAF.
- In the typical pre-training and fine-tuning paradigm, full fine-tuning may not always be the optimal approach. For instance, when we only have limited/few-shot data, techniques like linear probing or even nearest-neighbor classifiers can yield better results. Furthermore, the choice of fine-tuning method can also be influenced by the similarity between pre-training and downstream tasks [1]. However, the paper lacks a discussion of this aspect and solely focuses on full fine-tuning, neglecting the potential benefits of alternative methods. Specifically, does BiLAF remain necessary for selecting high-quality data points, if we can apply more suitable fine-tuning methods? Or would simple random sampling suffice?
[1] Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning, ICML 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: Besides the weakness shown in the above section, please also see the following questions:
- The efficacy of pre-trained features for the fine-tuning task is essential for BiLAF. Would there be a consistent improvement if we first apply unsupervised pre-training on the unlabeled fine-tuning set?
- Algorithm 1's pseudo-code indicates that ActiveFT is the first stage of BiLAF. However, Table 3 suggests that BiLAF is faster than ActiveFT. Why is BiLAF faster than ActiveFT if it needs to run ActiveFT first?
- It is an interesting idea to consider an opponent penalty to encourage diversity across boundaries. I wonder if this penalty has varying effects on different pseudo classes. Specifically, does it help more on a pseudo class with more neighboring classes (which could potentially be the more confusing class), like the pink one in Figure 3, while helping less for those on the border, such as the green one in Figure 3?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: In the appendix, the paper acknowledges a limitation in the proposed method, as it is based on general features. However, as I indicated in the weakness section, the paper lacks a discussion/study to investigate the severity of this limitation and its potential impact on the performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***Q1: Concern about pre-trained features. Would there be a consistent improvement if we first apply unsupervised pre-training on the unlabeled fine-tuning set?***
**R1:**
1. Our main experiment utilized a checkpoint pre-trained on ImageNet using DINO. We conducted studies both **on consistent tasks (ImageNet) and inconsistent ones (CIFAR & Others)**, demonstrating that our method is effective regardless of whether the pre-trained features come from the same task.
2. Furthermore, we explored features trained **using different paradigms and architectures**, such as iBOT and DINO, as well as ResNet50 and DeiT-S, detailed in Appendix F’s Ablation Study. This highlights the generalizability of our approach across different feature sets.
3. It is widely acknowledged and applied that many pre-trained models, such as DINOv2 and CLIP, claim robust transferability across diverse downstream tasks after extensive pre-training on large datasets.
4. Due to constraints of time and resources, we were unable to perform additional unsupervised training. However, we conducted experiments showing that the boundary points selected based on pseudo-classes are **consistent** with the actual boundaries, and that the boundaries selected based on pre-trained features are consistent with those from fully-trained Oracle models. The experimental results are detailed in the following official comments due to the character limit.
>***Q2: Full fine-tuning may not always be the optimal approach? Linear probing or even nearest-neighbor classifiers can yield better results. Specifically, does BiLAF remain necessary for selecting high-quality data points with more suitable fine-tuning methods?***
**R2:** Thank you for your insightful comments and suggestions. Following your ideas, we experimented with Linear Probing and KNN on the CIFAR10 dataset and compared the results across Random, ActiveFT, and **BiLAF(ours)**.
| CIFAR10 | 0.5% | 1% | 2% | 5% |
|-------------|------|------|------|------|
| **KNN** | | | | |
| Random | 82.1 | 86.7 | 88.2 | 90.8 |
| ActiveFT | 86.8 | 87.2 | 88.5 | 91.7 |
| BiLAF | 85.7 | 87.4 | 88.7 | 91.8 |
| **Linear** | | | | |
| Random | 85.1 | 87.6 | 90.1 | 92.5 |
| ActiveFT | **87.8** | 88.7 | 90.6 | 92.8 |
| BiLAF | 86.5 | 89.1 | 91.0 | 93.0 |
| **Finetune** | | | | |
| Random | 77.3 | 82.2 | 88.9 | 94.8 |
| ActiveFT | 85.0 | 88.2 | 90.1 | 95.2 |
| BiLAF | 81.0 | **89.2** | **92.5** | **95.7** |
Based on these results, we have supplemented our experiments with a 5% budget. From the table, the following conclusions can be drawn:
- At extremely low data volumes, Linear Probing and KNN outperform full fine-tuning.
- As the data volume increases, the performance improvements of Linear Probing and KNN start to slow down, which gradually **necessitates the use of full fine-tuning**.
- Interestingly, the quality of data selected by different methods shows a consistent trend across KNN, Linear, and full fine-tuning. **Our method, compared to competitors, is able to select more suitable data, which is effective across different fine-tuning paradigms.**
- Following prior work, our original focus was on full fine-tuning. We appreciate your comments, which have helped enhance the comprehensiveness of our work.
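The KNN rows above amount to classifying with frozen pre-trained features. A minimal nearest-neighbor probe can be sketched as follows (toy 2-D features stand in for the actual embeddings; names are ours):

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=1):
    """Nearest-neighbor probe on frozen features: each test sample takes
    the majority label among its k nearest labeled training samples."""
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    preds = []
    for row in idx:
        labels, counts = np.unique(train_labels[row], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# toy: two well-separated classes in feature space
train = np.array([[0., 0.], [0.2, 0.1], [5., 5.], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
test = np.array([[0.1, 0.1], [4.9, 5.1]])
pred = knn_predict(train, y, test, k=1)
```

Because no weights are updated, such a probe is cheap at tiny budgets, which matches the observation that KNN and linear probing beat full fine-tuning at 0.5%.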
>***Q3: Why BiLAF is faster than ActiveFT if it needs to run ActiveFT first?***
**R3:** The primary reason is that ActiveFT in our approach is utilized solely for selecting the core sample set, not the entire budget $B$ (e.g., 2%, 5%, 10%). In our CIFAR100 experiments, we fixed the core number at 0.5%. Therefore, the time consumption of BiLAF mainly comprises the time ActiveFT takes to select 0.5% of the samples plus the time for boundary sample selection, which can be less than directly selecting all samples with ActiveFT. For example, when $B$=2%, the denoising process takes $5.98$ seconds and iterative boundary sample selection takes $4.12$ seconds, with the remaining time spent on ActiveFT selecting the core points and other operations. A significant portion of the time, including denoising, is independent of $B$. When $B$ is small, the impact of boundary sample selection on runtime is minimal, resulting in a negligible increase in runtime for our method across different $B$ values. For a comprehensive time complexity analysis, please see Appendix D.
>***Q4: Specifically, does it help more on a pseudo class with more neighboring classes while helping less for those on the border?***
**R4:** In an ideal scenario, if a central representative is identified for each class, the class on the border would select samples closer to other classes rather than samples farther away, as demonstrated in Figure 1. For instance, as shown in Figure 3, the green class would select adjacent samples from the blue, pink, orange, or even light blue categories, thereby promoting diversity at these boundaries. Such selections can significantly enhance the distinguishability of boundary classes from others, demonstrating that the 'opponent penalty to encourage diversity across boundaries' is a **universally applicable strategy**. However, in light of your concerns and upon further reflection, we recognize that as the number of boundary selections increases, the advantages for border classes may not be as pronounced as for central classes, which may require more samples to depict their more complex and varied boundaries accurately. Optimizing the distribution of boundary points based on the positions of the centers presents a compelling and valuable new direction for future research.
Thank you once again for your valuable time and insightful comments, which have greatly enhanced our work. We hope our responses have addressed your concerns and demonstrated the versatility of our method. We look forward to further discussions and sincerely appreciate it if you could reconsider your rating.
---
Rebuttal 2:
Title: Supplementary Experimental Results
Comment: >***Experiments For Q1.***
We conducted linear probing on all the samples with true labels using features from both the pre-trained model and the oracle model (fine-tuned on all samples). We analyzed whether samples selected using different methods—Random, ActiveFT, BiLAF (ours)—tend to be near the decision boundaries. We used two metrics for this analysis: 1) **Entropy**, where a higher value indicates greater uncertainty and a propensity towards boundary samples. 2) **Prob_Diff** (probability difference between the top two classes) calculated as the difference between the highest and second-highest probabilities. A smaller value indicates that the sample is closer to the boundary between these two classes.
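For clarity, the two metrics above can be computed directly from the linear-probe class probabilities. The following is an illustrative NumPy sketch (the function name and toy inputs are ours, not from the paper):

```python
import numpy as np

def boundary_metrics(probs):
    """Compute the two boundary-proximity metrics described above.

    probs: (n_samples, n_classes) array of class probabilities from
           linear probing (each row sums to 1).
    Returns per-sample entropy (higher = closer to a boundary) and
    prob_diff (top-1 minus top-2 probability; lower = closer).
    """
    probs = np.asarray(probs, dtype=float)
    # Entropy: -sum p log p, treating 0 * log(0) as 0.
    entropy = -np.sum(np.where(probs > 0, probs * np.log(probs), 0.0), axis=1)
    # Difference between the two largest probabilities per sample.
    top2 = np.sort(probs, axis=1)[:, -2:]
    prob_diff = top2[:, 1] - top2[:, 0]
    return entropy, prob_diff

# A confident prediction has low entropy and high prob_diff;
# a near-boundary prediction shows the opposite pattern.
ent, diff = boundary_metrics([[0.98, 0.01, 0.01],
                              [0.40, 0.35, 0.25]])
```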
|CIFAR10(Pretrain)|Selected Nums|Entropy $\uparrow$|Prob_Diff$\downarrow$|Entropy(Top50)$\uparrow$|Prob_Diff(Top50)$\downarrow$|
|-|-|-|-|-|-|
|Random|250|0.0935|0.9486|0.4260|0.7536|
|ActiveFT|250|0.0424|0.9747|0.2073|0.8743|
|BiLAF(ours)|250|**0.1023**|**0.9366**|**0.4769**|**0.6924**|
|Random|500|0.0960|0.9416|0.7022|0.5056|
|ActiveFT|500|0.0433|0.9763|0.3690|0.7799|
|BiLAF(ours)|500|**0.1149**|**0.9273**|**0.7089**|**0.4500**|
|Random|1000|0.0955|0.9430|0.9181|0.3081|
|ActiveFT|1000|0.0849|0.9495|0.8810|0.3560|
|BiLAF(ours)|1000|**0.1461**|**0.9064**|**1.0262**|**0.2005**|
|CIFAR100(Pretrain)|Selected Nums|Entropy $\uparrow$|Prob_Diff$\downarrow$|Entropy(Top50)$\uparrow$|Prob_Diff(Top50)$\downarrow$|
|-|-|-|-|-|-|
|Random|500|**0.6240**|**0.7295**|**2.2516**|**0.0876**|
|ActiveFT|500|0.2962|0.8664|1.5663|0.2369|
|BiLAF(ours)|500|0.3933|0.8167|1.7832|0.1423|
|Random|1000|**0.5317**|**0.7766**|2.2375|0.0594|
|ActiveFT|1000|0.3650|0.8430|2.0812|0.0833|
|BiLAF(ours)|1000|0.4751|0.7815|**2.2606**|**0.0542**|
|Random|2500|0.5253|0.7749|2.6790|0.0232|
|ActiveFT|2500|0.4851|0.7936|2.6653|0.0206|
|BiLAF(ours)|2500|**0.5795**|**0.7476**|**2.7196**|**0.0192**|
|Random|5000|0.5442|0.7652|2.8791|0.0151|
|ActiveFT|5000|0.5197|0.7768|2.8487|0.0118|
|BiLAF(ours)|5000|**0.6219**|**0.7336**|**2.9194**|**0.0090**|
|CIFAR10(Oracle)|Selected Nums|Entropy $\uparrow$|Prob_Diff$\downarrow$|Entropy(Top50)$\uparrow$|Prob_Diff(Top50)$\downarrow$|
|-|-|-|-|-|-|
|Random|250|0.001512|0.999831|0.002437|0.999729|
|ActiveFT|250|0.001521|0.999830|0.002403|0.999717|
|BiLAF(ours)|250|**0.001541**|**0.999827**|**0.002473**|**0.999714**|
|Random|500|0.001483|0.999835|0.002782|0.999676|
|ActiveFT|500|0.001542|0.999828|0.002958|0.999644|
|BiLAF(ours)|500|**0.001605**|**0.999820**|**0.003020**|**0.999641**|
|Random|1000|0.001551|0.999823|0.003543|0.999575|
|ActiveFT|1000|0.001527|0.999829|0.003472|0.999586|
|BiLAF(ours)|1000|**0.001598**|**0.999820**|**0.003588**|**0.999567**|
|CIFAR100(Oracle)|Selected Nums|Entropy $\uparrow$|Prob_Diff$\downarrow$|Entropy(Top50)$\uparrow$|Prob_Diff(Top50)$\downarrow$|
|-|-|-|-|-|-|
|Random|500|**0.049238**|**0.992216**|**0.176811**|**0.956287**|
|ActiveFT|500|0.040140|0.993335|0.124594|0.962662|
|BiLAF(ours)|500|0.042792|0.994406|0.137114|0.965457|
|Random|1000|**0.047730**|0.993905|0.202417|0.963037|
|ActiveFT|1000|0.042763|0.993913|0.198896|0.961337|
|BiLAF(ours)|1000|0.045447|**0.993854**|**0.203469**|**0.960462**|
|Random|2500|0.046654|0.993407|0.297133|0.919803|
|ActiveFT|2500|0.047036|0.993245|0.303767|0.919122|
|BiLAF(ours)|2500|**0.047357**|**0.993103**|**0.309907**|**0.917868**|
|Random|5000|0.047028|0.993741|0.407092|0.897825|
|ActiveFT|5000|0.045173|0.994264|0.340326|0.925870|
|BiLAF(ours)|5000|**0.049476**|**0.992885**|**0.466601**|**0.842029**|
Therefore, the conclusion is that our method effectively selects boundary samples, maintaining consistency across models with various capabilities. Consequently, as the quality of model features improves, training with our selected high-quality data can also yield consistent improvements.
---
Rebuttal 3:
Title: Looking Forward to Further Feedback
Comment: Thank you for your insightful suggestions. We believe we have comprehensively addressed your questions concerning the impacts of various pre-training features, fine-tuning paradigms, and the opponent penalty design.
It is worth noting that our method effectively identifies high-quality samples that significantly enhance model training across various pre-training features and tasks, facilitating the best accuracy across different fine-tuning paradigms. Our approach is fast, efficient, and generalizable.
We are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them.
We appreciate your time and effort reviewing our manuscript. Thanks for your consideration!
---
Rebuttal Comment 3.1:
Comment: Thanks to the authors for their detailed rebuttal. As it has addressed my concerns, I will raise my score from 4 to 5.
---
Reply to Comment 3.1.1:
Comment: Thank you for your positive feedback and the increased rating! We are deeply grateful for your review, which has greatly assisted us in supplementing and perfecting our paper. We will incorporate your suggestions and these new findings into the revised version. | null | null | null | null | null | null |
TableRAG: Million-Token Table Understanding with Language Models | Accept (poster) | Summary: This paper primarily concentrates on the scalability challenges associated with encoding entire tables as input for LLM reasoning. It introduces a retrieval-augmented generation (RAG) framework, named TableRAG, which utilizes query expansion along with schema and cell retrieval to identify essential information effectively. Furthermore, this study contributes to establishing new benchmarks for million-token-sized tables, significantly enhancing the understanding of LLM performance in large-scale table analysis.
Strengths: - the idea of schema-cell retrieval seems reasonable and shows better performance compared to baselines on some types of questions.
- this paper contributes new benchmarks concerning million-token-sized tables and enhances the understanding of LLM performance in large-scale table comprehension.
Weaknesses: - while the idea of schema-cell retrieval seems reasonable, the generalizability of the method is not very clear, especially for some questions that cannot be directly addressed by manipulating the tables (also noted in the limitation section)
- the experiments only consider GPT-3.5-turbo and Gemini-Pro as LLMs, which is limited; further testing on more open-source and closed-source models is necessary to prove the consistency of the proposed method.
- the performance in Table 4 appears effective solely for the ArcadeQA dataset, with the retrieval of schema and cells proving useful. However, for BirdQA, the enhancements seem limited. I'm a little concerned about the method's consistency when adding more datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: - for figure 8, I am slightly confused by the results obtained from the solver. It appears the query intends to request the average price for all types of wallets. Yet, in the provided example, the third action only yields the average price of leather wallets (leather_wallet_prices.means()), rather than the average for all wallets (wallet_prices.means()). I'm uncertain if this discrepancy stems from a misrepresentation or if there's potential confusion in the example.
- for controlling the budget in cell retrieval, how does the paper manage a column like "description," which consists of long sentences that might exhaust the entire budget, preventing other cells from being processed?
- for schema/cell retrieval, if the query expansion driven by LLMs fails to generate keywords that align with the schema/cells in the table, what will the outcomes be, or are there any solutions proposed in the paper? Additionally, I am curious about how the paper attempts to align the schema mentioned in the table with the knowledge from LLMs themselves, or does it simply rely on the generalizability of LLMs?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - the proposed method seems to primarily support the questions that can be answered by manipulating the tables, such as "What is the average price of the wallet?" Here, queries can be transformed into specific table operations (e.g., df['item_total'] = df['item_total'].str.replace('$','').str.replace(',','').astype(float)). However, it may not perform as well with more complex reasoning tasks. For instance, in the Tabfact dataset, where one must determine from a Wikipedia table and its caption which statements are supported or contradicted, the method might not show promising results for such reasoning tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Generalizability
We appreciate the reviewer's question regarding the generalizability of our schema-cell retrieval method. It's important to note that our work focuses on general TableQA, which often involves complex understanding and reasoning beyond simple table manipulation, as shown in the TabFact example in the general response. This distinguishes our approach from Text2SQL methods, which have a narrower scope.
The generalizability of TableRAG is inherently tied to the capabilities of the underlying solver agent. TableRAG can be easily integrated with any downstream LLM-based solver agents. In our implementation, we utilize ReAct, a powerful LLM-based solver with demonstrated effectiveness and generalizability in TableQA tasks [1, 2]. TableRAG's primary role is to efficiently and accurately provide the necessary information to the solver, enabling it to perform the required reasoning and extract relevant details.
1. [Rethinking Tabular Data Understanding with Large Language Models](https://aclanthology.org/2024.naacl-long.26) (Liu et al., NAACL 2024)
2. [ReAcTable: Enhancing ReAct for Table Question Answering](https://dl.acm.org/doi/10.14778/3659437.3659452) (Zhang et al., VLDB 2024)
## Evaluation on more models
We follow reviewer’s suggestion and include results obtained using Mistral Nemo, as shown in the general response.
The results demonstrate that TableRAG remains the most effective table prompting technique when working with Mistral Nemo, consistent with our previous findings using GPT 3.5 and Gemini 1.0 Pro.
## Ablation study
Thanks for the insightful feedback. The accuracy improvement on BirdQA ranges from 0.7% to 2%, which is in line with the improvements observed in prior work such as Binder[3] and Dater[4].
We observe an even more substantial accuracy boost on ArcadeQA, ranging from 2.3% to 6.6%. This is likely attributable to the larger scale of the input in this dataset. As detailed in Table 5, ArcadeQA has approximately twice the average number of columns and cells compared to BirdQA, suggesting that cell retrieval is particularly beneficial when handling larger tables. Nonetheless, both schema and cell retrieval components enhance performance across all tested datasets, and they contribute orthogonal advantages to the overall improvement.
3. [Binding Language Models in Symbolic Languages](https://openreview.net/forum?id=lH1PV42cbF) (Cheng et al., ICLR 2023)
4. [Large Language Models are Versatile Decomposers: Decomposing Evidence and Questions for Table-based Reasoning](https://dl.acm.org/doi/10.1145/3539618.3591708) (Ye et al., SIGIR 2024)
## Figure 8
Thanks for pointing out the typo. The variable `leather_wallet_prices` should indeed be `wallet_prices` as obtained in the previous step. We will correct this in the final version.
## Description column
Thank you for raising this important point about handling long text columns within our encoding budget. We acknowledge this limitation and designed TableRAG to mitigate its impact in the following ways:
1. **Prioritization of Categorical Columns**: Our frequency-based encoding prioritizes columns with more distinct categorical values, ensuring that critical information for indexing is processed first. This helps reduce the likelihood of long text columns exhausting the entire budget.
2. **Solver Agent's String Matching Capability**: Even if the retrieval doesn't cover all cells in long text columns like "description" or "address," our solver agent (ReAct in our case) can still perform basic string matching within those columns to identify relevant information. As demonstrated in Figure 8, the solver agent can detect the existence of the “description” column through retrieval and then use basic string matching techniques (e.g., `df['description'].str.contains('Wallet')`) to pinpoint the precise cell within those columns.
3. **Empirical Evidence from the Study of Encoding Budget**: The ablation study in Figure 6 shows that TableRAG is robust to changes in the encoding budget. We varied the encoding budget from 100 to 10,000 and found that TableRAG's performance remained near optimal on both ArcadeQA and BirdQA. In contrast, Row-Column Retrieval is sensitive to the encoding budget, with performance significantly dropping when the encoding budget is either increased or decreased. This study validates the earlier statement that even if not all relevant cells are retrieved, the solver agent can still obtain the necessary information through programs using the retrieved information in most cases.
While we recognize that there might be more optimized ways to handle long text columns, our current work prioritizes the implementation and evaluation of query expansion and schema-cell retrieval, which we believe are foundational for scaling LM-based TableQA. Addressing the specific challenge of long text columns will be a focus of our future research.
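As a minimal sketch of the string-matching fallback described in point 2, using a hypothetical table (only the `str.contains` pattern comes from the Figure 8 example; column names and values here are invented for illustration):

```python
import pandas as pd

# Hypothetical table; column names and values are illustrative only.
df = pd.DataFrame({
    "description": ["Leather Wallet, brown", "Canvas Tote Bag", "Slim Wallet"],
    "price": [25.0, 18.0, 30.0],
})

# Even if cell retrieval only surfaced the *existence* of the
# "description" column, the solver agent can still locate relevant
# rows with plain substring matching, as in the Figure 8 example.
wallets = df[df["description"].str.contains("Wallet")]
avg_price = wallets["price"].mean()
```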
## False alignment
Thank you for raising these important questions about the robustness of our retrieval and the alignment between schema and LLM knowledge.
**Handling Query Expansion Mismatches**: In TableRAG, we employ embedding-based retrieval, which inherently captures semantic similarities. Even if the LLM-generated keywords don't perfectly match the schema or cell values, the retrieval process will still return the most semantically relevant ones. This provides a degree of robustness against potential mismatches in query expansion.
**Schema Alignment and LLM Generalizability**: TableRAG, having access to both the query and the retrieved context, is expected to leverage its understanding of language and world knowledge to determine which information is pertinent and how to combine it to solve the problem at hand. While we don't explicitly enforce schema alignment within the LLM, we rely on its inherent ability to reason and connect information, even when the schema terms might not have been directly encountered during pre-training.
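To illustrate why embedding-based retrieval tolerates imperfect keyword matches, consider the following toy sketch (the vectors are illustrative stand-ins, not outputs of the actual encoder):

```python
import numpy as np

def retrieve_top_k(query_emb, corpus_embs, k=2):
    """Return indices of the k corpus items most similar to the query.

    Cosine similarity degrades gracefully: a query embedding that is
    near (but not identical to) a schema/cell embedding still ranks
    that item highly, giving robustness to keyword mismatches.
    """
    q = np.asarray(query_emb, dtype=float)
    c = np.asarray(corpus_embs, dtype=float)
    q = q / np.linalg.norm(q)
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    sims = c @ q
    return np.argsort(-sims)[:k]

# Toy 2-D embeddings standing in for real encoder outputs.
schema_embs = np.array([[1.0, 0.1],   # e.g. "price"
                        [0.1, 1.0],   # e.g. "description"
                        [0.9, 0.3]])  # e.g. "cost"
query = np.array([1.0, 0.0])          # keyword semantically near "price"/"cost"
top = retrieve_top_k(query, schema_embs, k=2)
```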
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I'm now willing to increase my rating from 4 to 5. I think the contributions are clear, and reasonable, including (1) schema-cell retrieval and (2) new benchmark considering million-token size tables. Cheers!
---
Reply to Comment 1.1.1:
Comment: Thank you for recognizing the contribution of this work and increasing the review rating! We appreciate your valuable feedback and will incorporate it into the final version. | Summary: The paper introduces TableRAG, a framework that enhances LM-based table understanding by incorporating query expansion and retrieval mechanisms. TableRAG aims to improve the performance of large-scale table understanding tasks by efficiently encoding data and utilizing precise retrieval techniques. The framework achieves state-of-the-art results on various benchmarks and datasets.
Strengths: 1. The overall method is relatively simple and intuitive.
2. Experimental results on three datasets show the effectiveness of the proposed method.
Weaknesses: 1. The contribution of the paper is relatively small. The authors mainly build upon the original Row-Column Retrieval and further reduce the retrieval cost via Schema-Cell Retrieval. The original work already included column selection; this paper only further reduces the returned rows to return specific cells.
2. The baseline of the experimental comparison is relatively weak, and the advantage of finer-grained retrieval is obvious.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Contribution
> The contribution of the paper is relatively small. The authors mainly build upon the original Row-Column Retrieval and further reduce the retrieval cost via Schema-Cell Retrieval. The original work already included column selection; this paper only further reduces the returned rows to return specific cells.
We thank the reviewer for the constructive feedback. We would like to emphasize that the existing Row-Column Retrieval requires encoding all rows and columns, which is infeasible for large-scale tables without significant truncation. Our work directly addresses this critical limitation and is differentiated in three key aspects:
1. **New Benchmarks**: We introduce two real-world large-scale TableQA benchmarks, ArcadeQA and BirdQA, filling a critical gap in the field. These benchmarks enable rigorous evaluation of TableQA systems on tables with millions of tokens, pushing the boundaries of research in this domain.
2. **Scalable Framework**: We propose TableRAG, a comprehensive framework that combines novel techniques such as query expansion, frequency-based filtering, and schema-cell retrieval. This framework enables LM-based TableQA to scale to tables with millions of tokens, a significant advancement over existing methods.
3. **Rigorous Evaluation**: We provide thorough complexity analysis and extensive empirical studies, demonstrating TableRAG's effectiveness and efficiency. Our work paves the way for broader applications of TableQA in real-world scenarios involving large tables.
We will make these clearer in the final version.
## Baselines
> The baseline of the experimental comparison is relatively weak, and the advantage of finer-grained retrieval is obvious.
To the best of our knowledge, TableRAG represents the first dedicated effort to address LM-based large-scale table understanding. We are eager to include any specific baselines the reviewer may recommend that we might have overlooked. While we acknowledge the effectiveness of existing methods on smaller tables, our analysis and studies clearly demonstrate their limitations in scaling due to context length constraints. Furthermore, we have shown that previous attempts at scalable retrieval methods for tabular data encountered inherent limitations. Our carefully designed TableRAG framework overcomes these challenges, achieving superior performance and scalability, as validated through comprehensive complexity analysis and extensive ablation studies.
To better understand TableRAG’s performance against state-of-the-art baselines, we performed experiments on WikiTableQA with GPT 3.5, as shown below:
| Method | Accuracy |
|-------------------------------------|:---------:|
| TaBERT (Yin et al., 2020) | 52.3 |
| Text-to-SQL (Rajkumar et al., 2022) | 52.9 |
| Binder (Cheng et al., 2022) | 56.74 |
| Dater (Ye et al., 2023) | 52.81 |
| **TableRAG (Ours)** | **57.03** |
In this benchmark, the entire table content often fits within the context window of the language models, which is not representative of real-world large-scale industrial scenarios.
Nevertheless, the results clearly indicate that TableRAG maintains its superior performance even on smaller tables. It's important to highlight that the baseline methods require that LLMs process the entire table, thus limiting their scalability for large tables.
---
Rebuttal Comment 1.1:
Comment: We've submitted our rebuttal, and Reviewer aMG9 has already found it helpful enough to increase their rating. We're hoping you might also get a chance to check it out before the deadline in two days – we'd love to hear your thoughts!
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, we're hoping you might get a chance to check out the rebuttal before the deadline today. We'd love to get your feedback! | Summary: This paper introduces a novel approach to retrieving cell values and database schema within the table question answering domain. The method operates as follows:
1) It expands the initial question by identifying potential queries for column retrieval or cell value retrieval.
2) For each query generated in the first step, the method identifies the top-K columns using their embeddings.
3) Within a given encoding budget for cell retrieval, it proposes embedding the most frequent B distinct values for each column and retrieves them based on their similarity to each query.
Additionally, the authors have developed two new datasets for large-scale table question answering. Their approach demonstrates improved cost efficiency and effectiveness on these large-scale datasets compared to previous methods.
Strengths: 1) The proposed approach significantly outperforms the baseline models, particularly in retrieving cell values, showing a substantial improvement in recall metric.
2) Unlike previous works in the table question answering domain, which focused on small-scale tables that do not accurately represent real-world databases, this approach tackles a more challenging setting. It addresses scenarios where tables can contain millions of rows and numerous columns. The authors also developed two new datasets specifically designed for large-scale table question answering.
3) The paper includes well-conducted ablation studies and experiments that compare their method against their baseline methods. These studies demonstrate the effectiveness of each component of their approach.
Weaknesses: 1) My primary concern pertains to the comprehensiveness of the baselines used. While the paper correctly mentions that the table schema is predominantly considered in the text-to-SQL domain, cell values also play a crucial role in SQL generation. Notably, papers like CodeS [1] have proposed using BM25 retrieval to find cell values, which would serve as an excellent baseline for comparison. Additionally, schema linking—the process of identifying the correct rows and columns—is well-explored. Approaches proposed in studies like TaBERT [2] could also be utilized as baselines.
2) Given an encoding budget, the strategy of retaining only the most frequent distinct cell values seems problematic, especially for queries that require data from columns containing names, addresses, etc. Heuristic-based methods, such as those using syntactic similarity with edit distance, can effectively filter most of the cell values and might be more appropriate.
[1]: CodeS: Towards Building Open-source Language Models for Text-to-SQL
[2]: TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
Technical Quality: 3
Clarity: 3
Questions for Authors: Not applicable
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Baselines
> Papers like CodeS [1] have proposed using BM25 retrieval to find cell values, which would serve as an excellent baseline for comparison. Additionally, schema linking—the process of identifying the correct rows and columns—is well-explored. Approaches proposed in studies like TaBERT [2] could also be utilized as baselines.
We thank the reviewer for the valuable references. It's important to note that our work focuses on general TableQA, which often involves complex understanding and reasoning beyond simple table manipulation. This distinguishes our approach from Text2SQL methods, which have a narrower scope. While CodeS primarily focuses on Text2SQL, its BM25 retrieval approach offers a potential enhancement to our TableRAG framework. Thus, we implement BM25 and [hybrid retrieval](https://python.langchain.com/v0.1/docs/modules/data_connection/retrievers/ensemble/) as a replacement for embedding-based retrieval to better understand the impact of different retrieval methods.
| Method | | ArcadeQA | | | BirdQA | |
|----------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| | small | large | full | small | large | full |
| ReadTable | 18.2 | 0.0 | 4.6 | 36.4 | 0.0 | 9.1 |
| ReadSchema | 48.5 | 41.2 | 43.1 | 49.4 | 37.2 | 40.3 |
| RandRowSampling | 42.4 | 40.2 | 42.3 | 49.4 | 29.9 | 34.7 |
| RowColRetrieval | 39.4 | 37.1 | 37.7 | 49.4 | 36.4 | 39.6 |
| TableRAG (BM25) | 45.5 | 35.1 | 37.7 | 46.8 | 32.0 | 35.7 |
| TableRAG (Hybrid) | 51.5 | 46.4 | 46.2 | 54.5 | 41.1 | 44.5 |
| **TableRAG (Embed)** | **54.5** | **47.4** | **49.2** | **55.8** | **42.0** | **45.5** |
The results demonstrate that TableRAG with embedding-based retrieval still outperforms BM25 and hybrid retrieval consistently.
Regarding TaBERT and other LM-based methods, their effectiveness is indeed constrained by the context window of language models, making them less suitable for large-scale tables. Nevertheless, we recognize the benefit of comparing TableRAG against representative baselines, and we evaluate TableRAG on additional WikiTableQA benchmark with GPT 3.5 as shown below:
| Method | Accuracy |
|-------------------------------------|:---------:|
| TaBERT (Yin et al., 2020) | 52.3 |
| Text-to-SQL (Rajkumar et al., 2022) | 52.9 |
| Binder (Cheng et al., 2022) | 56.74 |
| Dater (Ye et al., 2023) | 52.81 |
| **TableRAG (Ours)** | **57.03** |
In this benchmark, the entire table content often fits within the context window of the language models, which is not representative of real-world large-scale industrial scenarios.
Nevertheless, the results clearly indicate that TableRAG maintains its superior performance even on smaller tables. It's important to highlight that the baseline methods require that LLMs process the entire table, thus limiting their scalability for large tables.
## Limitation of retaining most frequent distinct cell values
> Given an encoding budget, the strategy of retaining only the most frequent distinct cell values seems problematic, especially for queries that require data from columns containing names, addresses, etc. Heuristic-based methods, such as those using syntactic similarity with edit distance, can effectively filter most of the cell values and might be more appropriate.
We appreciate the reviewer highlighting this potential limitation of the cell value filtering strategy. We were indeed aware of this trade-off during the design process and opted for LM-based retrieval with an encoding budget for the following reasons:
1. **Semantic Understanding vs. Speed**: While heuristic-based methods offer computational efficiency, they often fail to capture subtle semantic relationships that are crucial for accurate retrieval in complex TableQA tasks. We have presented results from replacing embedding-based retrieval with BM25 and a hybrid approach above. While BM25 and the hybrid approach excel at retrieving all cells from the table, their inferior semantic understanding capabilities result in poorer overall performance.
2. **Retrieval as Guidance, not the Final Answer**: Even with a limited encoding budget, our retrieval mechanism effectively guides the solver agent (PyReAct in our case) to the relevant columns. As demonstrated in Figure 8, the solver agent can detect the existence of the “description” column through retrieval and then use basic string matching techniques (e.g., `df['description'].str.contains('Wallet')`) to pinpoint the precise cell within those columns. Thus, while retrieval may not always return all relevant cells, it remains crucial for directing the solver's attention to the right areas of the table.
3. **Empirical Evidence from the Study of Encoding Budget**: The ablation study in Figure 6 shows that TableRAG is robust to changes in the encoding budget. We varied the encoding budget from 100 to 10,000 and found that TableRAG's performance remained near optimal on both ArcadeQA and BirdQA. In contrast, Row-Column Retrieval is sensitive to the encoding budget, with performance significantly dropping when the encoding budget is either increased or decreased. This study validates the earlier statement that even if not all relevant cells are retrieved, the solver agent can still obtain the necessary information through programs using the retrieved information in most cases.
We will make these clearer in the final version.
---
Rebuttal Comment 1.1:
Comment: We've submitted our rebuttal, and Reviewer aMG9 has already found it helpful enough to increase their rating. We're hoping you might also get a chance to check it out before the deadline in two days – we'd love to hear your thoughts!
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, we're hoping you might get a chance to check out the rebuttal before the deadline today. We'd love to get your feedback! | Summary: The paper introduces TableRAG, a novel framework that improves LM-based table understanding by incorporating advanced query expansion and retrieval mechanisms. TableRAG addresses the critical scalability challenges associated with large-scale tables by efficiently encoding data and employing precise retrieval techniques. Through a comprehensive evaluation on self-constructed benchmarks sourced from real-world datasets and synthetic data from TabFact, TableRAG demonstrates superior performance and reduced token consumption across various table sizes.
Strengths: 1. **Scalability**: The framework is scalable for larger tables, as demonstrated by new benchmarks sourced from real-world datasets and synthetic data from TabFact.
2. **State-of-the-Art Performance**: TableRAG achieves the highest retrieval quality and new state-of-the-art performance on large-scale table understanding tasks.
Weaknesses: 1. **Lack of Novelty** Many papers about RAG reveal that the accuracy of the evidence is crucially important for performance, and this paper is another such practice in the TableQA domain.
2. **Lack of evaluation robustness** The paper only evaluates the model on closed models gpt-3.5 and gemini, but fails to evaluate on open models like Mistral, Llama etc.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Why not evaluate on more models?
2. Why not evaluate on commonly used TableQA benchmarks like WikiTableQA?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes, the paper discusses the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Novelty
>Many papers about RAG reveal that the accuracy of the evidence is crucially important for performance, and the idea of this paper is another instance of this practice in the TableQA domain.
Thank you for highlighting the importance of evidence accuracy in RAG systems. While this principle is well-established for unstructured data, its application to structured data like tables remains an open question. Prior retrieval-based methods for table understanding haven't consistently outperformed non-retrieval methods, primarily due to the challenges of efficient retrieval from tabular data. Our work addresses this gap by introducing a novel cell-schema retrieval method. Additionally, since existing benchmarks only focus on small tables, we contribute two large-scale TableQA datasets to facilitate further research in this area.
## Evaluation on open models
>The paper only evaluates the model on closed models gpt-3.5 and gemini, but fails to evaluate on open models like Mistral, Llama etc.
>Why not evaluate on more models?
>Why not evaluate on commonly used TableQA benchmarks like WikiTableQA?
We follow the reviewer’s suggestion and include results obtained with Mistral Nemo, the latest model from Mistral with a 128K context length, as shown below.
| Method | ArcadeQA (small) | ArcadeQA (large) | ArcadeQA (full) | BirdQA (small) | BirdQA (large) | BirdQA (full) |
|-----------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| ReadTable | 21.2 | 0.0 | 5.4 | 33.8 | 0.0 | 8.4 |
| ReadSchema | 36.4 | 30.9 | 32.3 | 41.6 | 33.8 | 35.7 |
| RandRowSampling | 45.5 | 22.7 | 28.5 | 49.4 | 28.1 | 33.4 |
| RowColRetrieval | 39.4 | 26.8 | 30.0 | 48.1 | 32.5 | 36.4 |
| **TableRAG** | **51.5** | **44.3** | **46.2** | **53.2** | **42.4** | **45.1** |
The results demonstrate that TableRAG remains the most effective table prompting technique when working with Mistral Nemo, consistent with our previous findings using GPT 3.5 and Gemini 1.0 Pro. Furthermore, even with its 128K context length, Mistral Nemo struggles to handle large tables in ArcadeQA and BirdQA.
## Evaluation on WikiTableQA
> Why not evaluate on commonly used TableQA benchmarks like WikiTableQA?
In our paper, we have evaluated on TabFact, a commonly used TableQA benchmark, and extended it to study the behavior of TableRAG at different scales. In addition, we follow the reviewer's suggestion and perform experiments on WikiTableQA with GPT 3.5, as shown below:
| Method | Accuracy |
|-------------------------------------|:---------:|
| TaBERT (Yin et al., 2020) | 52.3 |
| Text-to-SQL (Rajkumar et al., 2022) | 52.9 |
| Binder (Cheng et al., 2022) | 56.74 |
| Dater (Ye et al., 2023) | 52.81 |
| **TableRAG (Ours)** | **57.03** |
In this benchmark, the entire table content often fits within the context window of the language models, which is not representative of real-world large-scale industrial scenarios.
Nevertheless, the results clearly indicate that TableRAG maintains its superior performance even on smaller tables. It's important to highlight that the baseline methods require that LLMs process the entire table, thus limiting their scalability for large tables.
---
Rebuttal Comment 1.1:
Comment: We've submitted our rebuttal, and Reviewer aMG9 has already found it helpful enough to increase their rating. We're hoping you might also get a chance to check it out before the deadline in two days – we'd love to hear your thoughts! | Rebuttal 1:
Rebuttal: ## General Response
We thank the reviewers for their constructive feedback. They found that our solution addresses the critical scalability challenge and delivers superior performance in comprehensive evaluations. The work contributes to a better understanding of LLM capabilities in large-scale table analysis.
Our primary focus is to improve the scalability of LM-based table understanding. We achieve this through TableRAG, a novel table-specific retrieval technique incorporating query expansion, schema-cell retrieval, and frequency-based filtering. Additionally, we introduce ArcadeQA and BirdQA, two large-scale TableQA benchmarks specifically designed to evaluate and advance our understanding of LLM performance on large-scale tables.
We carefully address individual questions below and welcome further discussions of this work.
## Additional LLM evaluation
In addition to GPT 3.5 and Gemini 1.0 Pro, we have extended our evaluations to include Mistral Nemo, the latest model from Mistral with a 128K context window.
| Method | ArcadeQA (small) | ArcadeQA (large) | ArcadeQA (full) | BirdQA (small) | BirdQA (large) | BirdQA (full) |
|-----------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| ReadTable | 21.2 | 0.0 | 5.4 | 33.8 | 0.0 | 8.4 |
| ReadSchema | 36.4 | 30.9 | 32.3 | 41.6 | 33.8 | 35.7 |
| RandRowSampling | 45.5 | 22.7 | 28.5 | 49.4 | 28.1 | 33.4 |
| RowColRetrieval | 39.4 | 26.8 | 30.0 | 48.1 | 32.5 | 36.4 |
| **TableRAG** | **51.5** | **44.3** | **46.2** | **53.2** | **42.4** | **45.1** |
- TableRAG outperforms all baselines, consistent with previous results observed in GPT 3.5 and Gemini 1.0 Pro.
- Despite their large 128K context window, LLMs continue to encounter challenges when dealing with large-scale tables. These challenges are largely mitigated by the proposed TableRAG.
## Evaluation on common TableQA benchmark
We follow the reviewer’s suggestion and evaluate TableRAG with GPT 3.5 on the additional WikiTableQA benchmark, comparing its performance against representative baselines:
| Method | Accuracy |
|-------------------------------------|:---------:|
| TaBERT (Yin et al., 2020) | 52.3 |
| Text-to-SQL (Rajkumar et al., 2022) | 52.9 |
| Binder (Cheng et al., 2022) | 56.74 |
| Dater (Ye et al., 2023) | 52.81 |
| **TableRAG (Ours)** | **57.03** |
- While TableRAG is primarily designed to address the challenges of large-scale tables, it also exhibits superior performance on smaller tables.
## Evaluation of Different Retrieval Methods
In response to the questions about the limited encoding budget of embedding-based retrieval methods, we follow the reviewer’s suggestion and implement BM25 and hybrid retrieval.
| Method | ArcadeQA (small) | ArcadeQA (large) | ArcadeQA (full) | BirdQA (small) | BirdQA (large) | BirdQA (full) |
|----------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| ReadTable | 18.2 | 0.0 | 4.6 | 36.4 | 0.0 | 9.1 |
| ReadSchema | 48.5 | 41.2 | 43.1 | 49.4 | 37.2 | 40.3 |
| RandRowSampling | 42.4 | 40.2 | 42.3 | 49.4 | 29.9 | 34.7 |
| RowColRetrieval | 39.4 | 37.1 | 37.7 | 49.4 | 36.4 | 39.6 |
| TableRAG (BM25) | 45.5 | 35.1 | 37.7 | 46.8 | 32.0 | 35.7 |
| TableRAG (Hybrid) | 51.5 | 46.4 | 46.2 | 54.5 | 41.1 | 44.5 |
| **TableRAG (Embed)** | **54.5** | **47.4** | **49.2** | **55.8** | **42.0** | **45.5** |
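The lexical gap behind these numbers can be made concrete with a minimal BM25 scorer (the classic BM25 formula, not the paper's implementation): because BM25 matches tokens exactly, a query scores zero against a cell expressed with a synonym, which is the weakness embedding-based retrieval addresses.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with classic BM25.

    `docs` is a list of token lists; returns one score per document.
    Exact token matching only: no credit for semantically similar terms.
    """
    n = len(docs)
    avg_len = sum(len(d) for d in docs) / n
    # Document frequency: number of documents containing each term.
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avg_len)
            )
        scores.append(s)
    return scores

# Toy "cells": the second uses a synonym ("cost") and scores zero.
docs = [["price", "usd", "100"], ["cost", "eur", "90"], ["price", "price", "usd"]]
print(bm25_scores(["price"], docs))
```

Hybrid retrieval, as in the table above, combines such lexical scores with embedding similarity to get both exact matching and semantic coverage.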
- Although BM25 is faster and capable of retrieving all cells, its lack of semantic understanding makes embedding-based retrieval a superior choice for TableRAG. For a more detailed discussion, please refer to our response to Reviewer yX7T. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Equivariant Neural Diffusion for Molecule Generation | Accept (poster) | Summary: This paper presents Equivariant Neural Diffusion (END), a novel diffusion model for 3D molecule generation. The major novelty of END over previous molecule diffusion models lies in adopting a learnable forward process based on neural flow diffusion models. Experiments show that END can achieve good performance on benchmark datasets.
Strengths: - Successfully incorporate neural flow diffusion models into the equivariant 3D molecule generation framework and demonstrate its usefulness in experiments.
- Generally good, clear and well-organized writing.
Weaknesses: - The novelty contribution of the proposed END method is not high, as END is largely a combination of EDM [1] and neural flow diffusion models. Particularly, the paper does not give a clear discussion or analysis about why adopting learnable forward diffusion process is useful and beneficial to 3D molecule generation, or what molecular structures can be additionally captured by END through learnable forward diffusion process compared with previous diffusion models.
- There already exist some SDE based 3D molecule generation methods like EDM-BRIDGE [2] and EEGSDE [3]. Authors are encouraged to highlight the key difference in the diffusion process between END and these methods.
- Compared with GEOLDM, END does not show better performance (Tables 1 and 2), which weakens the claim about the advantages of using a learnable forward process. Since the main evaluation metrics proposed by EDM [1] for 3D molecule generation are saturating in the recent literature, the authors are encouraged to adopt the metrics proposed by HierDiff [4] to better evaluate the quality of generated 3D molecules.
[1] Equivariant Diffusion for Molecule Generation in 3D. ICML 2022.
[2] Diffusion-based Molecule Generation with Informative Prior Bridges. NeurIPS 2022.
[3] Equivariant Energy-Guided SDE for Inverse Molecular Design. ICLR 2023.
[4] Coarse-to-Fine: a Hierarchical Diffusion Model for Molecule Generation in 3D. ICML 2023.
-------------------Post Rebuttal---------------------
I appreciate authors' efforts in addressing my concerns and questions in rebuttal. After reading over authors' rebuttal responses and pdf, I think all my concerns have been addressed so I increased my score. I hope authors will carefully add all rebuttal updates (discussion, analysis and experiment results) to the revised version of paper in the future.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses part.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback.
We address the reviewer’s concerns/questions below:
**Limited technical novelty**
As mentioned in our general rebuttal, END is indeed a combination of existing ideas. We however want to emphasize that (1) our work is the first to seek improvement by adding a learnable forward process, orthogonal to most previous work, which has mostly focused on the reverse process, and (2) designing an appropriate transformation was not straightforward: it required care to preserve the desired invariance of the learned distribution.
**Relevance of learnable forward process**
Most improvements to diffusion models for molecules have focused on designing more expressive denoisers, better noise schedules, or embedding discrete features in a continuous latent space. In END, we seek improvement in an orthogonal direction. Adding a learnable forward allows the hierarchy of latent variables to be learnt, instead of fixed as in classical diffusion models, thereby offering greater flexibility and enhanced generative modeling. We note that END can be combined with previous developments for even better performance.
This enhanced generative modelling is greatly showcased on the more challenging and realistic GEOM-Drugs, where the learnable forward greatly improves connectivity (~20% across all sampling schemes), and leads to 3D structures of better quality as evidenced by the significantly lower strain energies.
Finally, the addition of a learnable forward process comes with two practically relevant by-products: (1) END achieves better performance with fewer integration steps, and (2) the learnable forward yields an improved conditional generative process – as demonstrated on two conditional tasks where cEND even outperforms by a significant margin a model leveraging an auxiliary model through classifier-guidance (EEGSDE) (Tables 3 and 4 in the submitted manuscript).
**Previous work**
We first note that both papers were already included as baselines / previous work in the submitted manuscript, but, as suggested by the reviewer, we will add a more thorough discussion to the final version. We summarize it here:
* EEGSDE is a continuous-time formulation of EDM, where conditional generation is performed by combining (1) a conditional score model with (2) a method similar to classifier guidance (requiring the training of an auxiliary model). In cEND, we instead only learn a conditional model, akin to classifier-free guidance.
* Similar to END, EDM-BRIDGE builds on the observation that there exists an infinity of processes mapping from prior to target distributions. EDM-Bridge constructs one such process that incorporates some prior knowledge, i.e. part of the drift term is a physically-inspired force term. END can be seen as a generalization of EDM-Bridge, where the forward drift term is now learned instead of pre-specified. Through experiments, we show that a learnable forward performs better than a fixed one, even when the latter is physics-inspired.
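To make the contrast in the first bullet concrete, the two conditioning schemes can be written schematically (standard formulations from the guidance literature, not equations lifted from either paper):

```latex
% Classifier guidance (EEGSDE-style): an auxiliary model p(c \mid z_t)
% is trained separately and its gradient steers the generative score:
\nabla_{z_t} \log p(z_t \mid c)
  = \nabla_{z_t} \log p(z_t) + \nabla_{z_t} \log p(c \mid z_t)

% Classifier-free style (cEND): a single conditional score model
% s_\theta(z_t, c) is trained directly on (z_t, c) pairs, so no
% auxiliary classifier is required at training or sampling time.
```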
**Additional metrics**
We thank the reviewer for their suggestion. We added a citation to HierDiff [1] and computed additional metrics taken from the paper, namely: SAScore, QED, logP and MW (see rebuttal PDF). To further assess the quality of the generated 3D structures, we also added a metric that measures the strain energy – expressed as an energy difference between the generated geometry and its relaxation (obtained through force-field geometry optimization).
These additional metrics were computed on the valid × connected samples generated by each method. We converted the generated samples to SDF using OpenBabel, and read them in using RDKit. For the SAScore, we normalized the values between 0 and 1, with 0 being “difficult to synthesize” and 1 “easy to synthesize”.
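As a schematic of the strain-energy metric, here is a toy one-dimensional example (a harmonic bond potential with gradient-descent relaxation, standing in for the actual force-field geometry optimization behind the reported numbers):

```python
def energy(r, r0=1.5, k=100.0):
    """Harmonic bond energy (toy stand-in for a force field)."""
    return 0.5 * k * (r - r0) ** 2

def relax(r, r0=1.5, k=100.0, lr=1e-3, steps=2000):
    """Gradient-descent geometry relaxation of a single bond length."""
    for _ in range(steps):
        grad = k * (r - r0)  # analytic gradient of the harmonic potential
        r -= lr * grad
    return r

generated = 1.8               # bond length of a "generated" geometry
relaxed = relax(generated)    # relaxed geometry, converges to r0
strain = energy(generated) - energy(relaxed)
print(round(strain, 3))       # strain of the generated geometry
```

The closer a generated geometry is to a force-field minimum, the smaller this energy difference; a large value indicates a strained, low-quality 3D structure.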
All the additional metrics are provided in the rebuttal PDF. We summarize here the main results.
* On QM9, END shows better agreement with the data distribution (except for QED which is captured perfectly by all methods). In particular, the reduction in strain energy demonstrates that END yields better geometries than the baselines (Table 1 in rebuttal PDF).
* On GEOM-Drugs, in addition to greatly improved connectivity, the SAScore, QED and logP are shown to be in better agreement with the data distribution (Table 3 in rebuttal PDF). The geometries are also shown to be of better quality as per the significantly reduced strain energy (Table 2 in rebuttal PDF).
**Comparison to GeoLDM**
We downloaded the GEOM-Drugs checkpoint available with the official implementation of GeoLDM, and evaluated the generated samples. We added one line with the obtained results in Table 2 in the rebuttal PDF, and summarize the findings here.
While all samples generated by GeoLDM were effectively deemed “valid”, only around **46%** were connected, against nearly **83%** for END. The strain energy of the samples generated by END was also significantly lower, 55 kcal/mol vs 133 kcal/mol for GeoLDM, underscoring the advantages of a learnable forward process.
[1] Coarse-to-Fine: a Hierarchical Diffusion Model for Molecule Generation in 3D. ICML 2023.
---
Rebuttal Comment 1.1:
Title: Follow-up Response
Comment: I appreciate authors' efforts in addressing my concerns and questions in rebuttal. After reading over authors' rebuttal responses and pdf, I think all my concerns have been addressed so I increased my score. I hope authors will carefully add all rebuttal updates (discussion, analysis and experiment results) to the revised version of paper in the future.
---
Reply to Comment 1.1.1:
Title: Follow-up
Comment: We want to thank the reviewer for re-evaluating our submission, and providing a positively updated score.
We will make sure to include all rebuttal updates in the final version of the manuscript. | Summary: The paper presents END, a diffusion model for 3D molecule generation that
- is equivariant to euclidean transformations and
- includes a learnable forward process.
Specifically, the forward process in the presented model is defined as a learnable transformation, dependent on both time and data, such that the resulting latent representation $z_t$ transforms covariantly with the injected noise. This is the main difference between the forward pass of Neural Flow Diffusion Models and that of the proposed method.
Strengths: - Can be used for both conditional and unconditional molecule generation.
- Improves on existing equivariant diffusion models.
- Experimentally, the proposed method shows improvements on both conditional and unconditional generation on the QM9 and GEOM-DRUGS datasets.
Weaknesses: - Lack of a thorough ablation of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The only form of ablation I see for the proposed model is in Table 1, where two versions of END are provided. However, even here, it seems that END with $\mu_\phi$ performs similarly or sometimes better (according to the presented metrics) than the full END model. Is the same pattern observed for the conditional generation tasks?
- Relating to the first question above, can a much more thorough ablation of the proposed method be provided in both testing scenarios to really ascertain the utility of the components in the proposed method?
- Can the training times for the benchmarked methods be provided for comparison? While it is stated passingly that END requires more training time, can this be cast in contrast with the baselines by providing the actual numbers?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. We are happy that our work was positively received by the reviewer.
We address the reviewer’s concerns/questions below:
**Ablation**
The key component in our method is the learnable forward process. Hence, the logical ablation is whether to include a learnable forward (=END), or not (=EDM).
As stated in our global rebuttal, we made sure that all baselines shared the same amount of parameters as the models featuring a learnable forward to ensure that the difference in performance did not come from an increase in parameters. We did so by having 10 rounds of message-passing in EDM, against 5 in the forward and 5 in the reverse for END – all other things equal.
As pointed out by the reviewer, the ablated version where only the mean is learned (the standard deviation of the conditional marginal is pre-specified and derived from the same noise schedule as EDM) is shown to perform on par with the full version. We believe this is due to the relative simplicity of the task. We are currently running the same ablation on the more challenging GEOM-Drugs, and hope to be able to report the results before the end of the discussion period.
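Schematically, in our notation (a paraphrase of the ablation setup, not the manuscript's exact equations), the conditional marginal of the forward process in the two variants reads:

```latex
% Full END: both mean and standard deviation of the conditional
% marginal are learned:
q_\varphi(z_t \mid x)
  = \mathcal{N}\!\big(z_t;\ \mu_\varphi(x, t),\ \sigma_\varphi(x, t)^2 I\big)

% Mean-only ablation: the standard deviation is pre-specified from
% the EDM noise schedule \sigma_t:
q_\varphi(z_t \mid x)
  = \mathcal{N}\!\big(z_t;\ \mu_\varphi(x, t),\ \sigma_t^2 I\big)
```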
As nicely suggested by the reviewer, we conducted the same ablation in the conditional setting (composition-conditioned generation). The corresponding results are provided in the table here below – we will also add it to the final version. The ablated model is presented in the last row.
| Matching [%] (↑) | 50 | 100 | 250 | 500 | 1000 |
|------------------|------|------|------|------|------|
| cEDM | 69.6 | 73.0 | 74.1 | 76.2 | 75.5 |
| cEND | 89.2 | 90.1 | 91.2 | 91.5 | 91.0 |
| cEND (mean only) | 75.7 | 79.9 | 82.7 | 83.0 | 83.5 |
Regarding additional ablations, disabling equivariance in $F_\varphi$ could also be an option; however, previous work [1] has clearly shown that an architecture obeying the relevant symmetries is a useful inductive bias. We would be happy to hear about any additional ablations that the reviewer would like to see in the final version of the paper.
**Timing**
In terms of timing, a training step on QM9 (common batch size of 64) takes on average ~0.37s (END) vs ~0.16s (EDM); this corresponds to a ~2.3x relative increase.
We observe the same trend on GEOM-Drugs (with an effective batch size of 64): a training step takes on average ~0.40s for END vs. ~0.15s for EDM (corresponding to a ~2.7x relative increase).
We do not observe that END requires a larger number of epochs to converge, compared to EDM. All models were trained for the same number of epochs / steps.
With our current implementation, sampling with END requires slightly less than 3x the time required by EDM on average. We report sampling times on QM9 (1024 samples) as an example, for varying numbers of integration steps.
| time [s] (↓) | 50 | 100 | 250 | 500 | 1000 |
|--------------|------|-------|-------|-------|--------|
| EDM | 30.4 | 60.6 | 149.7 | 297.7 | 593.7 |
| END | 88.6 | 179.6 | 445.9 | 886.4 | 1765.8 |
However, we want to stress that END can achieve comparable (or better) accuracy with fewer integration steps. A concrete example of this can be seen in Table 2 in the rebuttal PDF (GEOM-Drugs), where with only 100 steps END yields better samples (connectivity and strain energy) than EDM with 1000 integration steps, i.e. in ~⅓ of the time.
As mentioned in the global rebuttal, we note that alternative parameterizations of the reverse process are possible. In particular, the drift of the reverse process $\hat{f}_{\theta, \varphi}$ could be learned without direct dependence on $f_{\varphi}$, thereby leading to very limited overhead with respect to vanilla diffusion models for sampling.
[1] Equivariant Diffusion for Molecule Generation in 3D. ICML 2022.
---
Rebuttal Comment 1.1:
Title: Follow up ablation on GEOM-Drugs
Comment: We want to thank again the reviewer for suggesting us to run additional ablations.
As promised in our initial rebuttal, we provide here additional results on the more challenging GEOM-Drugs -- i.e. an ablated model where only the mean is learned (the standard deviation of the conditional marginal is pre-specified and derived from the same noise schedule as EDM).
The results are collected in the last row of the table herebelow, and presented with the initial results for better readability.
Similarly to the conditional setting, learning only the mean provides a clear improvement over the baseline, but is shown to perform slightly worse than the full model across all metrics except validity.
| Model | Steps | At. Stab. [%] | V [%] | V$\times$C [%] | TV$_A$ [$10^{-2}$] |
|-----------------|-------|:---------------------------:|:---------------------------:|:---------------------------:|:---------------------------:|
| EDM | 50 | $84.7_{\scriptstyle \pm.0}$ | $93.6_{\scriptstyle \pm.2}$ | $46.6_{\scriptstyle \pm.3}$ | $10.5_{\scriptstyle \pm.1}$ |
| | 100 | $85.2_{\scriptstyle \pm.1}$ | $93.8_{\scriptstyle \pm.3}$ | $56.2_{\scriptstyle \pm.4}$ | $8.0_{\scriptstyle \pm.1}$ |
| | 250 | $85.4_{\scriptstyle \pm.0}$ | $94.2_{\scriptstyle \pm.1}$ | $61.4_{\scriptstyle \pm.6}$ | $6.7_{\scriptstyle \pm.1}$ |
| | 500 | $85.4_{\scriptstyle \pm.0}$ | $94.3_{\scriptstyle \pm.2}$ | $63.4_{\scriptstyle \pm.1}$ | $6.4_{\scriptstyle \pm.1}$ |
| | 1000 | $85.3_{\scriptstyle \pm.1}$ | $94.4_{\scriptstyle \pm.1}$ | $64.2_{\scriptstyle \pm.6}$ | $6.2_{\scriptstyle \pm.0}$ |
| END | 50 | $87.1_{\scriptstyle \pm.1}$ | $84.6_{\scriptstyle \pm.5}$ | $68.6_{\scriptstyle \pm.4}$ | $5.9_{\scriptstyle \pm.1}$ |
| | 100 | $87.2_{\scriptstyle \pm.1}$ | $87.0_{\scriptstyle \pm.2}$ | $76.7_{\scriptstyle \pm.5}$ | $4.5_{\scriptstyle \pm.1}$ |
| | 250 | $87.1_{\scriptstyle \pm.1}$ | $88.5_{\scriptstyle \pm.2}$ | $80.7_{\scriptstyle \pm.6}$ | $3.5_{\scriptstyle \pm.0}$ |
| | 500 | $87.0_{\scriptstyle \pm.0}$ | $88.8_{\scriptstyle \pm.3}$ | $81.7_{\scriptstyle \pm.4}$ | $3.3_{\scriptstyle \pm.0}$ |
| | 1000 | $87.0_{\scriptstyle \pm.0}$ | $89.2_{\scriptstyle \pm.3}$ | $82.5_{\scriptstyle \pm.3}$ | $3.0_{\scriptstyle \pm.0}$ |
| **END (mean only)** | 50 | $85.6_{\scriptstyle \pm.1}$ | $87.8_{\scriptstyle \pm.2}$ | $66.0_{\scriptstyle \pm.4}$ | $7.9_{\scriptstyle \pm.0}$ |
| | 100 | $85.8_{\scriptstyle \pm.1}$ | $89.9_{\scriptstyle \pm.1}$ | $73.7_{\scriptstyle \pm.4}$ | $6.1_{\scriptstyle \pm.1}$ |
| | 250 | $85.7_{\scriptstyle \pm.1}$ | $91.2_{\scriptstyle \pm.2}$ | $77.4_{\scriptstyle \pm.4}$ | $5.0_{\scriptstyle \pm.1}$ |
| | 500 | $85.8_{\scriptstyle \pm.1}$ | $91.6_{\scriptstyle \pm.1}$ | $78.6_{\scriptstyle \pm.3}$ | $4.8_{\scriptstyle \pm.1}$ |
| | 1000 | $85.8_{\scriptstyle \pm.1}$ | $91.8_{\scriptstyle \pm.1}$ | $79.4_{\scriptstyle \pm.4}$ | $4.6_{\scriptstyle \pm.0}$| | Summary: The Equivariant Neural Diffusion (END) model is a novel approach for molecule generation in 3D that maintains equivariance to Euclidean transformations. Unlike traditional diffusion models that use a pre-specified forward process, END introduces a learnable forward process, parameterized through a time- and data-dependent transformation. This innovation allows the model to adapt better to the underlying data distribution. Experimental results demonstrate that END outperforms strong baselines on standard benchmarks for both unconditional and conditional generation tasks, particularly excelling in generating molecules with specific compositions and substructures. This flexibility in modeling complex molecular structures suggests significant potential for applications in drug discovery and materials design.
Strengths: **Originality:** END introduces a novel learnable forward process, diverging from the fixed processes in traditional diffusion models. This allows the model to better adapt to the underlying data distribution, especially in complex 3D molecular structures. It creatively combines elements from Neural Function Matching Diffusion Models (NFDM) and Equivariant Diffusion Models (EDM), enhancing the generative process while maintaining E(3) equivariance.
**Quality:** The model demonstrates superior performance in generating 3D molecular structures, outperforming strong baselines in both unconditional and conditional settings. It excels in generating stable, valid, and unique molecules, particularly evident in its results on the GEOM-DRUGS dataset. Comprehensive experiments and ablation studies confirm the robustness and reliability of the model.
**Clarity:** The paper provides a detailed and clear exposition of the methodology, including the formulation of the learnable forward process, parameterization, and evaluation metrics. The inclusion of algorithmic steps and extensive experimental details facilitates replicability for researchers.
**Significance:** END represents a substantial advancement in generative modeling for 3D molecules, addressing limitations of prior models by improving sample quality and generation speed. Its ability to maintain equivariance while achieving high performance has significant implications for applications in drug discovery and materials design, potentially transforming these fields.
Weaknesses: **Performance Consistency:** The performance of END is not consistently superior to existing baselines. Although it shows competitive results, there are instances where traditional models, like EDM and its variants, outperform END, particularly in metrics such as validity and uniqueness across different datasets.
**Complexity and Scalability:** The added complexity of a learnable forward process, while innovative, increases the model’s training time and resource requirements. END requires more computational resources and longer training periods compared to simpler, fixed-process models like EDM . The model operates on fully-connected graphs, which limits its scalability to larger datasets and more complex molecular structures. This constraint can hinder its applicability in more demanding real-world scenarios.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why was GeoDiff not included as a baseline in your comparisons?
- Could you provide more detailed ablation studies that isolate the contributions of the learnable forward process and other key components of END?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: **Computational Complexity:** The learnable forward process in the END model increases computational complexity, resulting in longer training times and higher resource usage. The authors should discuss potential optimizations to reduce this overhead, such as more efficient algorithms or hybrid approaches. Comparing training times and resource requirements with simpler models would provide useful insights into the model's efficiency.
**Scalability Issues:** END's architecture limits its scalability to larger datasets and more complex molecular structures like proteins. To enhance scalability, the authors should explore strategies like sparse representations or hierarchical approaches. These methods could enable the model to handle larger and more complex datasets effectively.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. We are happy that our work was positively received by the reviewer.
We address the reviewer’s concerns/questions below.
**Performance Consistency**
Regarding validity, we first note that (1) cheminformatics software implicitly adds hydrogens if an atom has a valence smaller than expected, (2) validity is usually computed on the largest connected fragment (thereby ignoring the remaining atoms).
On QM9, mol. stability gives a better picture, as it ensures that each atom has exactly the desired valence. Models including a learnable forward process are shown to perform significantly better than baselines without one. Improvements are also observed across other metrics (Table 1 in submission).
On the more challenging GEOM-Drugs, we believe that evaluating the model in terms of (validity x connectivity) is more realistic. According to that metric, END is clearly better than the baseline and samples far more connected molecules (Table 2 in rebuttal PDF).
**GeoDiff**
To the best of our knowledge, GeoDiff [1] is a diffusion model for conformer generation, i.e. generation of coordinates given a molecular graph. Perhaps the reviewer had another model in mind?
**Ablation**
The key component in our method is the learnable forward process. Hence, the logical ablation is whether to include a learnable forward (=END), or not (=EDM).
To ensure that the difference in performance does not come from an increase in parameters, we made sure that all EDM baselines shared the same amount of parameters as the models featuring a learnable forward. We did so by having 10 rounds of message-passing in EDM, against 5 in the forward and 5 in the reverse for END – all other things equal.
We additionally provided another ablated version of END on QM9, where only the mean is learned, whereas the standard deviation of the conditional marginal is pre-specified and derived from the same noise schedule as EDM.
As suggested by another reviewer **[jNQ2]**, we conducted the same experiment in the conditional setting, where fixing the standard deviation is shown to lead to a small decrease in performance.
We are currently running the same ablation on GEOM-Drugs, and hope to be able to report the results before the end of the discussion period.
**Complexity and Scalability**
**Complexity:**
In terms of training, we did not observe that END required more epochs than the baseline to reach convergence. However, it is correct that each training step takes longer, and that the total training time is increased. All other things equal, we evaluated that increase at ~2.5x relative to EDM. In terms of resources, END could be trained on a single GPU for both datasets.
Regarding sampling, our current implementation incurs a ~3x increase relative to EDM.
However, we emphasize that END can be competitive with fewer integration steps. As a concrete example, on GEOM-Drugs, END with only 100 steps yields better samples than EDM with 1000 integration steps, i.e. in ~⅓ of the time.
Finally, alternative efficient parameterizations of the reverse process are possible. In particular, the drift of the reverse process $\\hat{f}\_{\\theta, \\varphi}(\\boldsymbol{z}\_t, t)$ could be learned without direct dependence on $f\_{\\varphi}$, thereby leading to a very limited overhead with respect to vanilla diffusion models for sampling.
**Scalability:** As with all concurrent approaches, scaling to large systems is currently limited by the full connectivity of the message-passing scheme.
An element specific to END is that, due to the learnable forward, the prior is no longer forced to be $\\mathcal{N}(0, I)$ as in conventional diffusion models and can instead, e.g., become size-specific. For large molecules, one could imagine scaling the prior with the number of atoms present in the system, i.e. $\\mathcal{N}(0, sI)$ where $s$ is a function of the number of atoms. Combined with a distance-based cutoff function to determine the neighborhood, this would allow for more graceful scaling – early steps of the denoising process would no longer result in fully-connected message passing, as is effectively the case with a simple $\\mathcal{N}(0, I)$ prior.
Another possibility for scaling to systems such as large coordination complexes is to design priors that “encode shapes”: e.g., a square planar geometry could be built as a center and 4 groups of atoms representing the coordinated ligands, thereby enabling more local message-passing schemes.
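For illustration, the size-dependent prior and distance-cutoff ideas above could be sketched as follows (a toy sketch of ours, not part of END; the scaling function $s(n) \propto n^{2/3}$ and all names are illustrative assumptions):

```python
import numpy as np

def sample_size_scaled_prior(n_atoms, dim=3, base_scale=1.0, rng=None):
    """Sample initial coordinates from N(0, s*I), where the variance scale s
    grows with the number of atoms (hypothetical choice: s ~ n^(2/3))."""
    rng = np.random.default_rng() if rng is None else rng
    s = base_scale * n_atoms ** (2.0 / 3.0)
    return rng.normal(0.0, np.sqrt(s), size=(n_atoms, dim))

def cutoff_neighbors(coords, cutoff):
    """Distance-based cutoff: return pairs (i, j) closer than `cutoff`,
    so message passing need not be fully connected for large systems."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.where((dist < cutoff) & (dist > 0.0))
    return list(zip(i.tolist(), j.tolist()))
```

With a standard $\mathcal{N}(0, I)$ prior, early denoising steps place all atoms within a few units of the origin, so a cutoff keeps the graph dense; a size-scaled prior spreads atoms out and keeps neighborhoods local from the start.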
\[1\] GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation, ICLR 2022 | Summary: This paper proposes an extension of diffusion models dubbed Equivariant Neural Diffusion, which leverages a learnable forward diffusion process to enhance flexibility. The entire framework has been constructed such that physical symmetry, i.e., equivariance/invariance, of the density is preserved. Experiments are performed on QM9 and DRUGS datasets in the task of molecule generation from scratch as well as controllable generation.
Strengths: 1. The presentation is mostly clear and the method is easy to follow.
2. The method has been demonstrated to perform favorably in the controllable generation setting, even when the number of sampling steps is limited to, e.g., 50.
Weaknesses: 1. The proposed approach seems to be an incremental combination of geometric diffusion models and neural flow diffusion models. The way of combining these two flavors incurs limited technical novelty. The core design mostly lies in constructing $F\_\varphi$ which is equivariant.
2. The performance on QM9 and DRUGS is a bit marginal compared with the selected baselines.
3. Missing important baselines, e.g., GeoBFN [1], which has shown strong performance on the same task, i.e., molecule generation. Moreover, GeoBFN can also achieve high performance with very few sampling steps, even only 20.
[1] Song et al. Unified generative modeling of 3d molecules with bayesian flow networks. In ICLR'24.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1. How does the method perform compared with GeoBFN, especially with different number of sampling steps?
Q2. Could the method be applied to systems with larger scales, e.g., proteins?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have discussed the limitations, including scaling, limited application scope, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. We address the reviewer’s concerns/questions here below:
**Limited technical novelty**
As mentioned in our general rebuttal, END is indeed a combination of existing ideas. We want to emphasize, however, that (1) our work is the first to seek improvement by adding a learnable forward process, orthogonally to most previous work, which has focused on the reverse process, and (2) designing an appropriate transformation was not straightforward; it required some care so as to preserve the desired invariance of the learned distribution.
**Marginal performance gain**
While we agree with the reviewer that the performance gains are limited on QM9, we have shown that END works considerably better than the considered baselines on GEOM-Drugs -- as summarized in the general rebuttal. Most notably, END significantly improves connectivity and geometry quality. It also shows better agreement with the training data on (newly added) drug-related metrics (SAScore, QED and logP).
**Missing baseline GeoBFN**
Unfortunately, since GeoBFN was published on arXiv \~2 months prior to the submission deadline, a thorough comparison to this concurrent work is infeasible. However, we extracted the relevant numbers presented in that paper, and a direct comparison with GeoBFN in Tables 1 and 2 will be part of the final version. We summarize the comparison below.
On QM9, END generally performs on par with GeoBFN on the stability / validity metrics, across sampling steps. We could not evaluate the additional metrics as we do not have access to samples / pretrained models for GeoBFN.
On the more realistic GEOM-Drugs, END outperforms GeoBFN in terms of atom stability (first row in the table below) – especially for small numbers of steps. Since we cannot check whether GeoBFN generates disconnected fragments, we can only compare on validity, on which GeoBFN seems to perform better (second row in the table below).
| Metrics / Steps | | 50 | 100 | 500 | 1000 |
|--------|--------|------|------|------|------|
| At. Sta. | END | 87.1 | 87.2 | 87.0 | 87.0 |
| | GeoBFN | 75.1 | 78.9 | 81.4 | 85.6 |
| | | | | | |
| V | END | 84.6 | 87.0 | 88.5 | 88.8 |
| | GeoBFN | 91.7 | 93.1 | 93.5 | 92.1 |
It would be interesting to evaluate the connectivity / strain energy of the samples produced by GeoBFN for a full comparison.
**Application to large systems**
There is no restriction on the systems END can be applied to.
As with all concurrent approaches, scaling to large systems is currently limited by the full connectivity of the message-passing scheme.
As with other methods treating categorical features as a continuous relaxation, we also note that for systems with a large number of atom types, alternatives to one-hot encoding (which scales linearly with the cardinality of the space) might be beneficial. Methods such as Analog Bits \[1\] (logarithmic scaling), or continuous lower-dimensional embeddings such as that of GeoLDM \[2\], are interesting approaches to dealing with discrete features within END.
\[1\] Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning, ICLR 2023\.
\[2\] Geometric Latent Diffusion Models for 3D Molecule Generation, ICML 2023
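As a toy illustration of the scaling argument (our own sketch, not the actual encoders of \[1\] or \[2\]): one-hot encoding of $K$ atom types needs $K$ dimensions, while a binary, Analog Bits-style encoding needs only $\lceil \log_2 K \rceil$:

```python
import math

def one_hot_dim(n_types: int) -> int:
    # One dimension per atom type: linear in the cardinality of the space.
    return n_types

def analog_bits_dim(n_types: int) -> int:
    # One real-valued "bit" per binary digit: logarithmic in the cardinality.
    return math.ceil(math.log2(n_types))

def analog_bits_encode(atom_type: int, n_types: int):
    # Map an integer type id to {-1, +1} bits (Analog Bits-style relaxation).
    n_bits = analog_bits_dim(n_types)
    return [1.0 if (atom_type >> b) & 1 else -1.0 for b in range(n_bits)]
```

For example, 100 atom types require 100 one-hot dimensions but only 7 analog bits.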
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Again, we want to thank the reviewer for providing constructive feedback.
We believe that we have now addressed the weaknesses raised in the initial review.
We are happy to clarify any issue that should remain.
---
Rebuttal 2:
Comment: Thank you for the response. However, I do believe the paper would be in better shape with some additional modifications, e.g., adding a comparison with advanced baselines (e.g., GeoBFN). I understand that GeoBFN was published ~2 months prior to the deadline, but the whole experiment would take only around several days, and a fair comparison is expected, since you both work on the same dataset and benchmark and claim the advantage of fewer-step sampling. Moreover, the work still seems technically incremental. I will slightly increase the score but still hold the opinion that the paper could be further strengthened by either showcasing strong performance against SOTA methods or exploring novel applications/use cases that constitute a unique contribution.
---
Rebuttal Comment 2.1:
Title: Follow up
Comment: We thank the reviewer for engaging in the discussion, and helping us improve the paper further.
First, we want to reiterate that a comparison with GeoBFN based on the published results is provided in our rebuttals, and will be included in the final version of the paper — details are provided below for reference.
Second, we want to emphasize that END showcases strong performance against SOTA methods, and specifically against GeoBFN.
On QM9, END demonstrates a level of performance very similar to that of GeoBFN (see below).
On GEOM-Drugs, END clearly outperforms GeoBFN in terms of atom stability (e.g. END’s 50-step atom stability is better than GeoBFN’s 1000-step), while GeoBFN does slightly better in terms of validity. We note that GeoLDM, a very relevant baseline, is shown to lead to higher validity than both GeoBFN and END, but that its connectivity and geometry quality are subpar compared to END – highlighting that it is difficult to conclude anything from that metric alone.
Finally, as suggested by the reviewer, we will run and collect additional metrics for GeoBFN, and include them in the final version of the paper. However, due to the unavailability of public checkpoints, we can unfortunately not perform that experiment before the discussion period ends.
**QM9**
| Metrics / Steps | | 50 | 100 | 500 | 1000 |
|-----------------|--------|-----------------|----------------|----------------|-----------------|
| At. Sta. | END | $98.6 \pm 0.0$ | $98.8 \pm 0.0$ | $98.9 \pm 0.0$ | $98.9 \pm 0.0$ |
| | GeoBFN | $98.28\pm 0.1$ | $98.64\pm 0.1$ | $98.78\pm 0.8$ | $99.08\pm 0.06$ |
| | | | | | |
| Mol. Sta. | END | $84.6 \pm 0.1$ | $87.4 \pm 0.2$ | $88.8 \pm 0.4$ | $89.1 \pm 0.1$ |
| | GeoBFN | $85.11 \pm 0.5$ | $87.21\pm 0.3$ | $88.42\pm 0.2$ | $90.87\pm 0.2$ |
| | | | | | |
| V | END | $92.7\pm 0.1$ | $94.1\pm 0.0$ | $94.8\pm 0.2$ | $94.8\pm 0.1$ |
| | GeoBFN | $92.27\pm 0.4$ | $93.03\pm 0.3$ | $93.35\pm 0.2$ | $95.31\pm 0.1$ |
| | | | | | |
| V x U | END | $91.4\pm 0.1$ | $92.3\pm 0.2$ | $92.8\pm 0.2$ | $92.6\pm 0.2$ |
| | GeoBFN | $90.72\pm 0.3$ | $91.53\pm 0.3$ | $91.78\pm 0.2$ | $92.96\pm 0.1$ |
**GEOM-Drugs**
| Metrics / Steps | | 50 | 100 | 500 | 1000 |
|-----------------|--------|----------------|----------------|----------------|----------------|
| At. Sta. | END | $87.1 \pm 0.1$ | $87.2 \pm 0.1$ | $87.0 \pm 0.0$ | $87.0 \pm 0.0$ |
| | GeoBFN | $75.11$ | $78.89$ | $81.39$ | $85.60$ |
| | | | | | |
| V | END | $84.6 \pm 0.5$ | $87.0 \pm 0.2$ | $88.8 \pm 0.3$ | $89.2 \pm 0.3$ |
| | GeoBFN | $91.66$ | $93.05$ | $93.47$ | $92.08$ | | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback.
We are pleased that the reviewers [**hMME,5dRq,K86T**] found the presentation of END clear, its construction original **[K86T]**, that it is a potentially significant contribution to the field **[K86T],** and that it demonstrates experimental benefits over existing models in unconditional and conditional settings [**hMME,K86T,jNQ2**].
In this global rebuttal, we address the main concerns of the reviewers. We also provide a detailed answer to each reviewer below.
## Performance and metrics
Three reviewers [**hMME,5dRq,K86T**] remarked that we did not demonstrate a clear improvement over competing methods. We respectfully disagree, and to highlight the strong performance improvement of END, we have included several additional metrics. Both the original metrics and the newly included ones show that END strongly outperforms competitors.
### Additional metrics
We added 5 metrics. As specifically suggested by reviewer **5dRq**, they are taken from HierDiff [1]:
* **SAS**: Synthetic accessibility score;
* **QED**: Quantitative estimation of drug-likeness;
* **logP**: Partition coefficient;
* **MW**: Molecular weight.
We also provide a metric measuring the quality of the generated 3D geometries: **Strain Energy**, computed as the difference in energy when relaxing the generated molecules (using RDKit MMFF).
### QM9
On the simpler QM9 benchmark, in terms of validity, stability, and uniqueness, END reaches a performance level similar to that of current SOTA methods (GeoLDM and GeoBFN) but these metrics are essentially saturated.
More importantly, END displays better agreement with the dataset distribution compared with EDM: for example, the TV on atom types goes from 2.5 (EDM) to 1.2 (END) at 1000 sampling steps (Table 1 in the submitted manuscript). Furthermore, END shows better agreement with the data distribution on all additional metrics (except QED, which is captured perfectly by all methods), as shown in Table 1 in the rebuttal PDF. In particular, the reduction in strain energy demonstrates that END yields better geometries than the baselines.
### GEOM-Drugs
On the more challenging and realistic GEOM-Drugs we show **large improvements** (Tables 2 and 3 in rebuttal PDF):
- (i) ~20% improvement over EDM in terms of “Validity x Connectivity”;
- (ii) ~50% improvement on “TV on atom types”;
- (iii) improved 3D geometries as per the significantly lower strain energy;
- (iv) improved SAS, QED and logP.
As suggested by reviewer **5dRq**, we conducted a more extensive comparison with GeoLDM by sampling using the publicly available checkpoint. We summarize here our findings (Table 2 in the rebuttal PDF):
* V x C improves from 45.8% (GeoLDM) to 82.5% (END)
* TV on atom types improves from 10.6 (GeoLDM) to 3.0 (END)
* Strain Energy improves from 133.5 (GeoLDM) to 55.0 (END).
We believe these results demonstrate the strong improvement on the prior SOTA. All metrics are collected in tables in the rebuttal PDF.
## Missing baselines / Previous work
**hMME** requested a comparison to the very recent GeoBFN [2] (posted on arXiv ~2 months prior to the submission). While already part of the literature review, we will add the comparison in Tables 1 and 2 in the final version.
**5dRq** requested a comparison to two previous works [3, 4]. While already included in the submission as baseline/related work, we will additionally provide a paragraph discussing the key differences in the final version (see answer to **5dRq** for summary).
## Ablations
**K86T** and **jNQ2** mentioned that more detailed ablation studies could make the paper even stronger.
First, we want to stress that we made sure all baselines had the same number of parameters as the models featuring a learnable forward, so as to ensure that the difference does not simply come from an increase in parameters.
Following reviewer **jNQ2**'s suggestion, we ran an additional ablation study, and added the results in our rebuttal to **jNQ2**. We are currently running an ablated version of END (mean-only) on GEOM-Drugs – we hope to be able to report the results during the discussion period.
## Timing
Two reviewers [**K86T, jNQ2**] requested more information about the increased training/sampling time.
In summary, all other things being equal, END leads to a ~2.5x increase per training step relative to EDM and requires the same number of epochs to converge.
Regarding sampling, our current implementation leads to a ~3x increase per function evaluation relative to EDM. However, END usually requires far fewer function evaluations to achieve comparable (or better) accuracy, and we note that alternative parameterizations of the reverse process are possible. In particular, the drift of the reverse process $\\hat{f}\_{\\theta, \\varphi}$ could be learned without direct dependence on $f\_{\\varphi}$, leading to very limited sampling overhead with respect to vanilla diffusion models. We will add these important considerations to the final version of the manuscript.
## Technical novelty
Some reviewers [**5dRq, hMME**] pointed out some limited novelty.
END is indeed a combination of existing ideas. We want to emphasize, however, that (1) our work is the first molecule-generation diffusion model to seek improvement by adding a learnable forward process, orthogonally to most previous work, which has focused on the reverse process, and (2) designing an appropriate transformation was not straightforward; it required some care so as to preserve the desired invariance of the learned distribution.
[1] Coarse-to-Fine: a Hierarchical Diffusion Model for Molecule Generation in 3D. ICML 2023.
[2] Song et al. Unified generative modeling of 3d molecules with bayesian flow networks. In ICLR'24.
[3] Diffusion-based Molecule Generation with Informative Prior Bridges. NeurIPS 2022.
[4] Equivariant Energy-Guided SDE for Inverse Molecular Design. ICLR 2023.
Pdf: /pdf/9c4e9f468810d42b0688bf2471e51c008848f658.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Asynchronous Multi-Agent Reinforcement Learning with General Function Approximation | Reject | Summary: In this paper, the authors study multi-agent reinforcement learning where agents cooperate through asynchronous communications with a central server to learn a shared environment. They consider the following two settings: multi-agent contextual bandits with general function approximation, and multi-agent RL with general function approximation. For both settings, they propose provably efficient algorithms with low regret and low communication complexity.
Strengths: 1. The problem of asynchronous MARL with general function approximation is interesting and important.
2. This paper is the first to consider the setting with general function approximation. The results are solid and the proof looks good to me.
3. For both settings, the authors propose provably efficient algorithms. The results generalize previous results under the linear setting.
Weaknesses: 1. It seems that part of the techniques is from previous results, such as the bonus function oracle. It will be helpful if there is a section discussing technical novelty.
2. It seems that the setting is closely related to low switching RL and RL with delayed feedback. It will be interesting if the authors could briefly discuss the connections.
3. For the communication complexity bound in theorem 5.1, should it be $/\alpha$ instead of $\alpha$? In addition, why not choose $\alpha=1/M$ in both theorems? In this way, the communication cost can be improved. (Please correct me if I misunderstood anything)
4. There is a broken reference ("Line ??") at line 214 of page 6. Please correct it.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the numerous feedback and suggestions to improve our paper. We have corrected the relevant typos and small mistakes, and address the reviewer’s major concerns below:
---
**Q1.** It seems that part of the techniques is from previous results, such as the bonus function oracle. It will be helpful if there is a section discussing technical novelty.
**A1.** We highlight some of our main technical novelties in the following. We mentioned the first two of these in the contributions section of our paper (Lines 56-62), while we omitted discussion of the third one due to the intricate technicalities involved and limited space.
- First, our design of the communication content (download data) is quite different from previous works. For multi-agent communication in a linear environment, one may utilize the closed-form solution for least squares regression and transmit the covariance matrix. In contrast, there is no such explicit solution for general function classes, and we designed a data transmission policy consisting of decision functions and bonus functions. This protocol avoids the transmission of the entire dataset, which could result in data leakage. It also does not require the transmission of confidence function sets, which is unrealistic in a nonlinear setting.
- Another key novelty is our communication criterion. The communication criterion in linear settings is formulated in terms of the increase in the covariance matrix determinant. Adapting this criterion directly to general function approximation settings is very challenging due to the lack of a closed-form solution to linear regression and of a counterpart to covariance matrices.
Our criterion is based on uncertainty estimators, and ensures that our algorithms achieve a regret independent of the number of agents, while maintaining logarithmic communication cost in terms of $T$ and $K$. Intuitively, a communication round is triggered when the local agent collects a substantial amount of new data compared to old data, which implies that the amount of collected data for each communication round grows exponentially.
To the best of our knowledge, the communication criterion we proposed is the first effective criterion for multi-agent nonlinear RL, achieving a regret independent of the number of agents and a communication cost logarithmically dependent on the number of episodes.
- One specific technical difficulty is that, when bounding the communication cost, we need to bound the summation of bonus functions across all local data with respect to the previous server update $Z^{\text{ser}}$ (see eq(8)). In contrast, our Lemma 6.2 provides an upper bound for the summation of uncertainty with respect to global data history $Z^{\text{all}}$. The discrepancy between server data $Z^{\text{ser}}$ and global data $Z^{\text{all}}$ presents a unique challenge, and necessitates a meticulous analysis of uncertainty estimators, wherein we have to navigate between these different datasets using the epoch segmentation scheme (see Sections A.2 & B.2).
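A toy sketch of the intuition behind such an uncertainty-based criterion (illustrative code of ours only; the actual criterion in the paper is formulated via bonus functions over the server and local datasets): communication is triggered once the accumulated uncertainty of new local data reaches an $\alpha$-fraction of what has already been synced, so the data per round grows geometrically and the number of rounds stays logarithmic in the number of episodes.

```python
def should_communicate(new_uncertainties, old_uncertainty_sum, alpha):
    """Toy trigger: sync with the server when the total uncertainty of new
    local data exceeds an alpha-fraction of the uncertainty already synced."""
    return sum(new_uncertainties) >= alpha * max(old_uncertainty_sum, 1.0)

def count_rounds(per_episode_uncertainty, alpha):
    """Count how many syncs a stream of per-episode uncertainties triggers.
    Each sync folds the new data into the old, so the amount of data needed
    to trigger the next sync grows geometrically."""
    old, new, rounds = 0.0, [], 0
    for u in per_episode_uncertainty:
        new.append(u)
        if should_communicate(new, old, alpha):
            old += sum(new)
            new, rounds = [], rounds + 1
    return rounds
```

With constant per-episode uncertainty and alpha = 1, syncs occur at episodes 1, 2, 4, 8, ..., i.e., logarithmically many rounds over K episodes.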
---
**Q2.** It seems that the setting is closely related to low switching RL and RL with delayed feedback. It will be interesting if the authors could briefly discuss about the connections.
**A2.** Thank you for this great suggestion! The design of our communication criterion is indeed partially inspired by the rare-switching strategy in single-agent RL settings, for example eq(3.1) in [1]. Both criteria are used effectively to control the amount of policy updates the agent(s) go through, yet the different problem settings of single-agent versus multi-agent lead to different goals for these criteria. We revised the “Switch Condition Based On Uncertainty Estimators” paragraph (Lines 230-233) to further discuss this:
*This criterion has a similar functionality as the determinant-based criterion in linear settings. It should also be noted that rare switching conditions in single-agent RL with general function approximation have a similar form [1], yet those conditions are used for balancing exploration and exploitation, while our communication criteria are used for balancing regret and communication cost. Specifically, parameter $\alpha$ controls communication frequency: ...*
Apart from this, as we mentioned in the “Switch Condition Based On Bonus Functions” paragraph (Lines 257-262), it is not appropriate to simply use the switch condition used in single-agent settings, as this would grant the local agent access to global datasets, including data collected by other agents. Thus we reformulated the switch condition with bonus functions instead of uncertainty estimators.
On the other hand, we do not see a direct link between our work and RL with delayed feedback. One may consider adapting a similar approach to rare switching to deal with delayed rewards, but our communication criterion is not designed with delayed feedback in mind.
---
**Q3.** For the communication complexity bound in theorem 5.1, should it be $/ \alpha$ instead of $\alpha$? In addition, why not choose $\alpha = 1/M$ in both theorems? In this way, the communication cost can be improved.
**A3.** For the communication complexity bound in Theorem 5.1, our original version indeed contained a typo: the communication cost should have been $O\big( H (1 + M \alpha)^2 / \alpha \dim_E (\mathcal{F}, \lambda / K) \log^2 (K / \min \{1, \lambda\} ) \big)$. As for the value of $\alpha$, notice that $C(M, \alpha) = \sqrt{1 + M\alpha} \big( \sqrt{1 + M\alpha} + M\sqrt{\alpha} \big)$, defined in Theorem 4.3, requires $\alpha = O(1 / M^2)$ to take a constant value independent of $M$, which is a prerequisite for the regret bound not to depend on $M$. We have modified our paper to reiterate the definition of $C(M, \alpha)$ in Theorem 5.1 for clarity.
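For concreteness, the behavior of $C(M, \alpha)$ can be checked numerically (a small sketch of ours): with $\alpha = 1/M^2$ it stays bounded by a constant since $M\sqrt{\alpha} = 1$, whereas with $\alpha = 1/M$ it grows like $\sqrt{M}$, which is why $\alpha = O(1/M^2)$ is needed.

```python
import math

def C(M: int, alpha: float) -> float:
    # C(M, alpha) = sqrt(1 + M*alpha) * (sqrt(1 + M*alpha) + M*sqrt(alpha)),
    # as defined in Theorem 4.3.
    r = math.sqrt(1.0 + M * alpha)
    return r * (r + M * math.sqrt(alpha))

# alpha = 1/M^2: M*sqrt(alpha) = 1 and M*alpha = 1/M -> 0, so C -> 2.
# alpha = 1/M:   M*sqrt(alpha) = sqrt(M), so C grows without bound.
```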
---
[1] Zhao et al. A nearly optimal and low-switching algorithm for reinforcement learning with general function approximation, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. I will keep my score. | Summary: The authors propose two algorithms for asynchronous communication in multi-agent reinforcement learning with generalized value function approximation: Asynchronous-NLin-UCB for context bandit scenarios and Asynchronous-NLSVI-UCB for episodic MDP scenarios. These algorithms achieve near-optimal regret with low communication complexity. The authors theoretically show the trade-off between regret and communication complexity.
Strengths: - The authors provide a detailed background on the related literature concerning regret, communication complexity, and the presence of asynchronous update, which is greatly helpful in understanding the contributions of the proposed algorithms.
- The theoretical foundations and proofs regarding the communication criterion are important and interesting. Also, the trade-off between regret and communication complexity via parameter $\alpha$ offers valuable insights.
- The approach of receiving decisions and bonus functions from a central server instead of historical data is intuitive and appears crucial from a privacy perspective.
Weaknesses: While I have studied general value function approximation, I do not have research background in this field for multi-agent scenarios. Therefore, my critique may not have captured the weaknesses of this paper.
I’m open to revising my score based on the authors' responses.
As far as I know, MARL often adopts the Centralized Training Decentralized Execution(CTDE) framework to avoid the action space growing exponentially with the number of users. However, it is unclear whether the proposed scenario follows "decentralized execution". Agents are supposed to execute based on partial observations in a decentralized manner, but the proposed approach appears to involve a central server consistently during execution. If the proposed scenario is inconsistent with CTDE, I would be interested to hear from the authors what the distinct advantages or necessity of this scenario is.
Typos:
- Line 214: Reference to the label is not correctly written.
- Theorems 4.3 and 5.1: $\tilde{\beta}$ is not properly defined in the statement, and $\beta_t$ should be fixed to $\tilde{\beta}$.
- Theorem 5.1: Total communication complexity should be fixed to $O((1+M\alpha)^2 / \alpha)$.
Technical Quality: 3
Clarity: 3
Questions for Authors: A simple question: Why is the order of the Eluder dimension in the regret reported as $O(\sqrt{\text{dim}_E})$ instead of $O(\text{dim}_E)$? The regret in Line 677 shows similar results to numerous other papers, but is written with $O(\sqrt{\text{dim}_E})$ despite the fact that $O(\text{dim}_E)$ dominates. For the linear MDP case, [1] presents a lower bound of $\Omega(dH\sqrt{T})$, and considering $\text{dim}_E = \tilde{O}(d)$, the reported results seem lower than the known lower bound. Many papers report similar results as the authors, but am I missing something?
[1] Zhou Dongruo, Quanquan Gu, and Csaba Szepesvari. "Nearly minimax optimal reinforcement learning for linear mixture Markov decision processes." Conference on Learning Theory. PMLR 2021.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The requirement for global state instead of partial observation may limit the practical applicability of the proposed methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and thoughtful questions. We have fixed all typos mentioned in the review, and address the reviewer’s notable concerns as follows:
---
**Q1.** Does the proposed scenario adapt the Centralized Training Decentralized Execution (CTDE) framework in MARL to avoid the action space growing exponentially with the number of users? If not, what are the distinct advantages or necessities of this scenario?
**A1.** Our setting of MARL is different from the CTDE framework. Specifically, in CTDE, agents share the same environment with a joint state and perform separate actions to maximize a shared reward, while in our setting, the agents operate within their own environments, with separate states, actions, and rewards, but try to learn a shared policy through data sharing and collaboration, which is closer to federated reinforcement learning. In this scenario, agents benefit from sharing their learning experiences, which accelerates the learning process. However, designing an efficient communication criterion between different agents becomes a key challenge. We will emphasize this difference in the revision.
---
**Q2.** Why is the order of the Eluder dimension of regret reported as $O(\sqrt {\dim_E})$ instead of $O(\dim_E)$, despite the fact that $O(\dim_E)$ dominates? For the linear MDP case, [1] presents a lower bound of $\Omega(dH \sqrt{T})$, and considering $\dim_E \sim \tilde{O} (d)$, the reported results seem lower than the known lower bound.
**A2.** We address the concern regarding the reported results’s dependency on $\dim_E$ in the following two bullet points:
- Discussion about the abbreviated regret bound: The regret bound in Theorem 5.1 has the form $\tilde{O} (H^2 \sqrt{K \dim_E (\mathcal{F}) \log N (\mathcal{F})} + H^2 \dim_E(\mathcal{F}))$. Although $\dim_E (\mathcal{F})$ in the second term dominates $\sqrt {\dim_E (\mathcal{F})}$ in the first, the most important term in reinforcement learning is typically the one where $K$ dominates. When space is limited, one typically omits all other terms and reports only $\tilde{O} (H^2 \sqrt{K \dim_E (\mathcal{F}) \log N (\mathcal{F})})$, making it easier to read and compare to other works.
- Discussion about the lower bound: Compared to previous works such as [2] mentioned by the reviewer, our dependency on dimension $d$ when reduced to the linear setting is also $O(d)$. Apart from $\sqrt{\dim_E (\mathcal{F})}$ contributing a factor of $\sqrt{d}$, $\sqrt{\log N (\mathcal{F})}$ also contributes a factor of $\sqrt{d}$, since the covering number of the linear function space $\mathcal{F}$ is typically exponential in terms of dimension $d$.
---
[1] Zhou et al. Nearly minimax optimal reinforcement learning for linear mixture Markov decision processes. Proceedings of Thirty Fourth Conference on Learning Theory, 2021
[2] Zhang et al. Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms, pages 321–384. Springer International Publishing, Cham, 2021. ISBN 978-3-030-60990-0. doi: 10.1007/978-3-030-60990-0_12
---
Rebuttal 2:
Comment: At this point, the author's answers seem to have resolved many of the questions I initially had, and I have a better understanding of their contribution to this field. However, I am still concerned that the paper is simply an extension of linear MDPs to general value function approximation.
Can the authors explain what new problems they encountered and solved in this extension?
---
Rebuttal Comment 2.1:
Title: Response to Comment
Comment: Thank you for the response! We’re glad to hear that we have resolved many of your questions. Regarding the contribution of our work compared with multi-agent RL in the linear MDP setting, we outline some of the main challenges and technical innovations below. The first two challenges are mentioned in the contributions section of our paper (Lines 56-62), but we did not discuss the third one due to its complex technical nature and limited space.
- First, our approach to designing the **communication content** (download data) differs significantly from previous work in linear MDPs. For multi-agent communication in a linear environment, one can utilize the closed-form solution for least squares regression and transmit the covariance matrix. However, for general function classes, no explicit solution exists. Thus, we developed a data transmission policy consisting of decision functions and bonus functions. This protocol avoids transmitting the entire dataset, which could result in data leakage, and does not require sending confidence function sets, which is impractical in a nonlinear context.
- Another major innovation is our **communication criterion**. The communication criterion in linear MDPs is based on the degree of increment in the covariance matrix determinant. Adapting the criterion directly to general function approximation settings is challenging due to the absence of a closed-form solution to linear regression and of a counterpart to covariance matrices.
Our criterion relies on uncertainty estimators. Intuitively, a communication round is triggered when the local agent collects a substantial amount of new data compared to old data, implying that the data collected for each communication round grows exponentially, and thus communication frequency decays exponentially as well. To the best of our knowledge, the communication criterion we proposed is the first effective criterion for multi-agent nonlinear RL, achieving a regret logarithmically dependent on the number of agents while also maintaining logarithmic communication cost with respect to $T$ and $K$.
- One particular technical difficulty arises when bounding the communication cost, as we need to bound the summation of bonus functions across all local data concerning the previous server update $Z^{\text{ser}}$ (see eq(8)). In particular, our Lemma 6.2 provides an upper bound for the summation of uncertainty concerning global data history $Z^{\text{all}}$. The **discrepancy between server data $Z^{\text{ser}}$ and global data $Z^{\text{all}}$** presents a unique challenge, requiring careful analysis of uncertainty estimators. This involves navigating between these different datasets using the epoch segmentation scheme (see Appendix A.2 & B.2). This issue also arises in multi-agent linear MDP settings, but it is much simpler to address when uncertainties are measured by matrices. | Summary: This paper studies the asynchronous multi-agent bandit and RL problem with general function approximation (measured by the eluder dimension). The main contribution is to establish an $\tilde{O}(\sqrt{\text{dim} T})$ regret bound with $\tilde{O}(M^2 \text{dim})$ communication complexity.
Strengths: This paper is well written and the contribution is solid.
Weaknesses: I think the major concern is the non-optimal complexity bounds. Although it seems unreasonable to ask for matching upper and lower regret bounds for the contextual bandit problem, the part about RL could possibly be improved (at least, the dependence on $H$ is not tight). I am also curious what the current best lower bound is for the communication cost needed to reach a $\sqrt{T}$ regret bound. It would be an interesting problem to study the exact trade-off between the communication cost and regret.
A minor concern might be about the technical novelty given previous methods on measuring the uncertainty.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the meaning of "fully asynchronous" in table 1?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the affirmative review, and address their concerns in the following:
---
**Q1.** The dependency of the regret upper bound on $H$ is non-optimal. Also, what is the current best lower bound for the communication cost to reach a $\sqrt{T}$ regret bound?
**A1.** Regarding the optimality of our bounds, we first list and comment on some important past works as reference:
- For the communication complexity, [1] studied **linear multi-agent MDPs** and proposed a lower bound in Theorem 5.5, which we paraphrase below:
*Given a communication complexity of less than $O(dM)$, the expected regret of an algorithm is at least $\Omega (H \sqrt{dMK})$.*
Since **linear multi-agent MDPs** are included in our general function class with eluder dimension $d$, it directly implies an $O(dM)$ lower bound in our setting.
Compared to our result, this communication complexity corresponds to the case where $\alpha = \Omega(1 / M)$, which yields a regret of $O(H^2 \sqrt{dMK})$. This is indeed optimal in all parameters except for the number of levels $H$, which we address further below.
- The current optimal regret guarantee for learning **single-agent nonlinear MDPs** is presented in Theorem 4.1 of [2], which achieves a regret upper bound of $\sqrt{HK \dim_{E} \log N}$ for large values of $K$. In comparison, our result is optimal in all parameters except for $H$. The **variance estimator** technique used to remove the extra dependency of $H$ could be also applied to our work and potentially enhance our results. However, since our primary focus is on proposing an algorithm for the multi-agent setting, we chose not to incorporate this technique and suggest it as a promising direction for future research.
---
**Q2.** A minor concern is the technical novelty given previous methods on measuring the uncertainty.
**A2.** We list some of our main technical novelties in the following, the first two of which we mentioned in the contributions section of our paper (Lines 56-62):
- First, our design of the communication content (download data) is quite different from previous works. For multi-agent communication in a linear environment, one may utilize the closed-form solution for least squares regression and transmit the covariance matrix. In contrast, there is no such explicit solution for general function classes, and we designed a data transmission policy consisting of decision functions and bonus functions. This protocol avoids the transmission of the entire dataset, which could result in data leakage. It also does not require the transmission of confidence function sets, which is unrealistic in a nonlinear setting.
- Another key novelty is our communication criterion. The communication criterion in linear settings is formulated via the degree of increment in the covariance matrix determinant. Adapting the criterion directly to general function approximation settings is very challenging due to the lack of a closed-form solution to linear regression and of a counterpart to covariance matrices.
Our criterion is based on uncertainty estimators, and ensures that our algorithms achieve a regret independent of the number of agents, while maintaining logarithmic communication cost in terms of $T$ and $K$. Intuitively, a communication round is triggered when the local agent collects a substantial amount of new data compared to old data, which implies that the amount of collected data for each communication round grows exponentially.
To the best of our knowledge, the communication criterion we proposed is the first effective criterion for multi-agent nonlinear RL, achieving a regret independent of the number of agents and a communication cost logarithmically dependent on the number of episodes.
- One specific technical difficulty is that, when bounding the communication cost, we need to bound the summation of bonus functions across all local data with respect to the previous server update $Z^{\text{ser}}$ (see eq(8)). In contrast, our Lemma 6.2 provides an upper bound for the summation of uncertainty with respect to global data history $Z^{\text{all}}$. The discrepancy between server data $Z^{\text{ser}}$ and global data $Z^{\text{all}}$ presents a unique challenge, and necessitates a meticulous analysis of uncertainty estimators, wherein we have to navigate between these different datasets using the epoch segmentation scheme (see Sections A.2 & B.2).
---
**Q3.** What is the meaning of "fully asynchronous" in table 1?
**A3.** We mainly use the phrase “fully asynchronous” to contrast the setting in [3], where at each round an agent is **chosen to participate** based on a fixed distribution over all agents, and after each communication round the policy update is **sent to all agents**. While this achieves asynchronicity to a certain degree, it does not fully reflect reality where the participation of agents can be arbitrary and completely order-less. The settings considered in the papers we marked as “fully asynchronous”, on the other hand, allow agents to individually decide when to activate and when to send their history data to the server and request policy updates. In the revision, we clarify this by modifying Lines 83-84 of our paper to the following:
*He et al. [2022] improved communication to be fully asynchronous, where each agent individually and independently interacts with the environment, and proposed the algorithm FedLinUCB with near-optimal regret...*
We also added the same clarification under the table.
---
[1] Min et al. Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation.
[2] Zhao et al. A nearly optimal and low-switching algorithm for reinforcement learning with general function approximation, 2023.
[3] Li and Wang. Asynchronous upper confidence bound algorithms for federated linear bandits.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will adjust my score after discussion with AC and other reviewers. | Summary: This paper studied the distributed federated contextual bandit and federated reinforcement learning (FRL) in the presence of a trusted server. In both problems, nonlinearity and asynchronous communications are explored. Similar algorithms for contextual bandit and FRL that encourage exploration via bonus functions are proposed. Finite-time convergence results in terms of regrets are established for both algorithms and communication complexities are also characterized.
Strengths: * This paper studied the asynchronous federated learning problem where only one agent is activated to sample data and infrequently communicate with server.
* The trigger-based communication is an interesting approach in multi-agent or multi-learner problems.
Weaknesses: * Some of the important quantities are not clearly defined or explained. For example,
1) In the sample complexity result of Theorem 4.3, $\tilde{\beta}_1$ is used. However, it was not defined. It’s unclear what this notation is referring to. Similarly, in Theorem 5.1, $\tilde{\beta}_2$ is used.
2) The oracle to compute the bonus term $b_{k+1,h}$ is crucial in understanding the algorithms. However, it was not very well explained or shown anywhere in the main paper.
3) The two sentences from Line 300 to Line 302 are confusing. Please clarify them.
* Typos:
1) An extra closing parenthesis appeared in Line 141.
2) Line 214, ?? -> 12
3) In Line 1 of algorithm 3, $k=[K]$ -> $k\in [K]$.
4) In Line 154, the trajectory should be $(s_h, a_h, \cdots, s_H, a_H)$.
Technical Quality: 2
Clarity: 2
Questions for Authors: * What if more than one agent is to be activated, instead of just one agent at each time $t$?
* How is an agent chosen to be activated in Line 5 of both Algorithm 1 and 2?
* What is the oracle to compute the bonus term?
* What are the terms $\tilde{\beta}_1$ and $\tilde{\beta}_2$?
* Please clarify the two sentences from Line 300 to Line 302.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Please see weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our submission in great detail and pointing out errors and shortcomings. We have fixed all the mentioned typos, and we address other notable concerns in detail below:
---
**Q1.** What are the definitions of $\tilde {\beta}_1$ in Theorem 4.3 and $\tilde {\beta}_2$ in Theorem 5.1?
**A1.** We apologize for not including them in our paper. We have modified the first section of our Theorem 4.3 to the following:
*By taking $\gamma = O (1 / T)$, $\beta_t = \tilde {\beta}_1 = C\_{\beta, 1} \big( \sqrt{\lambda} + R C(M, \alpha) \log (3M N(\mathcal{F}, \gamma) / \delta) \big) $*.
We have also made similar changes to Theorem 5.1.
---
**Q2.** The bonus oracle to compute the bonus terms $b_{k + 1, h}$ are not very well-explained or shown in the paper.
**A2.** We defined our bonus oracle $\mathcal{B}$ in Definition 4.1 of our main paper for both bandit and RL problems, under the multi-agent bandit section. Due to the page limit, we did not redefine the oracle for the MDP case, but instead chose to define the oracle in a more general setting and use the same definition for both cases. Accordingly, we referred the reader back to Definition 4.1 on Line 299 when discussing our algorithm for MDPs.
For the validity of assuming such an oracle exists, many previous works have assumed a similar oracle [1, 2], and other works have justified the use of such an oracle by proposing computation methods for bonus functions [3, 4], as we mentioned in Remark 4.2 after Definition 4.1. Since the focus of our paper is tackling the asynchronous multi-agent setting, we saw fit not to include excessive detail regarding the oracle, which serves as a bonus function approximator.
Nevertheless, we thank the reviewer for bringing this issue to our attention, and we will further revise our final paper to ensure the reader is not confused about our oracle definition.
---
**Q3.** Please clarify the two sentences on Lines 300-302.
**A3.** We have rewritten our RL algorithm to define different oracles $\mathcal{B}^h \_{\mathcal{S} \times \mathcal{A}}$ for each level $h \in [H]$ for clarity. Correspondingly, we have rewritten our paragraph on oracle explanation (Lines 298-302) to the following:
*Similar to the bandits setting, the uncertainty can be approximated with the bonus function acquired from an oracle $\mathcal{B}^h \_{\mathcal{S} \times \mathcal{A}}$, which is introduced in Definition 4.1. We specify the elements in the definition under MDPs as follows: the domain $\mathcal{D} = \mathcal{S} \times \mathcal{A}$, and the data format corresponds to $z = (s, a)$ and $e = (r, s')$. Notice that when we call the oracle on Line 16 of Algorithm 2, the elements $Z_h^{\text{ser}}, \mathcal{F}_h$ and the expected return $b_h$ all operate on the same level $h$, therefore we can assume a different oracle for each level $h \in [H]$, with different corresponding bonus function classes $\mathcal{W}\_{h} = \mathcal{W}\_{h, \mathcal{S} \times \mathcal{A}}$.*
We hope this clarifies our intentions.
---
**Q4.** How is an agent chosen to be activated in Line 5 of both Algorithms 1 and 2? What if more than one agent is to be activated instead of just one agent at each time $t$?
**A4.** Regarding the questions about the agents’ activation: even though we assume an order of activation for the agents in our work, this is only for convenience in our theoretical analyses. Realistically, agents individually decide when to activate and when to send their history data to the server and request policy updates, and only the communication order is visible to the server. Under realistic settings, the probability that two agents communicate at exactly the same time is extremely low, and even in such events the server can simply process one request before moving on to the next, thus not affecting our overall algorithm.
We modified the paragraph on Lines 204-206 in our main paper to the following for clarity:
***Part I: Local Exploration.** At step $t$ a single agent $m = m_t$ is active (Line 5). It receives a decision set, finds the greedy action according to its decision function $f\_{m, t}$, receives a reward, and updates its local dataset $Z\_{m, t} ^{\text{loc}}$ (Lines 5-7). Note that the specific order of agent activation does not affect our algorithm, as long as the communication order remains the same. This nicely reflects realistic fully asynchronous multi-agent settings, where each agent individually and independently interacts with the environment until communication is triggered by some criterion.*
---
[1] Agarwal et al. VO$Q$L: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation. In Proceedings of Thirty Sixth Conference on Learning Theory, Jul 2023.
[2] Zhao et al. A nearly optimal and low-switching algorithm for reinforcement learning with general function approximation, 2023.
[3] Kong et al. Online sub-sampling for reinforcement learning with general function approximation, 2023.
[4] Wang et al. Reinforcement learning with general value function approximation: Provably efficient approach via bounded eluder dimension. In Advances in Neural Information Processing Systems, 2020b.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I appreciate the authors' efforts on the response. I have one follow-up question with regard to A4. I am wondering about the validity of the following scenario:
In the multi-agent setting, in particular when the number of agents is sufficiently large, the probability of two agents communicating with the server with overlapping time windows seems to depend on the switch condition. If the switch condition is easy to satisfy, then the above probability is not negligible. In the case that an overlap does happen, an active sampling agent A who just satisfied the switch condition may have to wait for the server to finish processing the request (Lines 11 and 12) of another active agent B. The processing may also take significant time depending on the new data size.
Is above scenario possible to occur? Thanks.
---
Reply to Comment 1.1.1:
Title: Response to Follow-up Question
Comment: Thank you for the follow-up question!
We first point out that in theory, we assume data upload, calculation of decision function $\hat{f}$ and bonus function $b$, as well as function download are all instantaneous (Lines 12-20 in Algorithm 2), so the proposed scenario will never occur and hence does not affect our theoretical analysis.
That being said, should we implement our algorithm in practice, we believe this scenario is also very unlikely to happen for the following reasons:
1. As per our communication criterion in equation (8), each agent only communicates with the server once their local sum of uncertainty accumulates to $\alpha$, which means **the frequency of communication decreases exponentially** relative to round $K$. Therefore, when $K$ is large, the communication frequency becomes extremely low, which means the probability of overlap is also very low;
2. In order to avoid communication overhead when $K$ is small, one may consider initializing all local agents with some initial observations before deploying our asynchronous algorithm. This ensures that local agents will not start communicating until their observed data exceed a certain threshold.
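The doubling behavior of this criterion can be illustrated with a toy simulation (a sketch under the simplifying assumption of constant per-step uncertainty; the function and threshold form are illustrative stand-ins, not the paper's exact algorithm):

```python
# Toy sketch of an uncertainty-based communication trigger: an agent syncs
# with the server once the uncertainty accumulated since its last sync
# exceeds a threshold alpha times the uncertainty already covered by old
# data. With per-step uncertainty roughly constant, the data gathered
# between syncs grows geometrically, so the number of communication
# rounds is only logarithmic in the horizon K.
def count_communications(K: int, alpha: float = 1.0) -> int:
    old, new, comms = 0.0, 0.0, 0
    for _ in range(K):
        new += 1.0                # per-step uncertainty (constant toy value)
        if new > alpha * old:     # communication criterion triggered
            comms += 1
            old += new            # server absorbs the newly uploaded data
            new = 0.0
    return comms

comms_1k = count_communications(1_000)
comms_1m = count_communications(1_000_000)
```

With `alpha = 1.0`, the trigger fires only when the new data matches the old, so syncs occur roughly at steps $2^k - 1$ and the communication count grows like $\log_2 K$ rather than linearly in $K$.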
Finally, even without the assumption that communication is instantaneous, it is not difficult to modify our theoretical analysis to accommodate for overlapping communication and **delayed feedback**. If we simply ask the agent to **adhere to the old policy** until the updates are received, under proper assumptions the regret term will be incremented by $O(\log K)$ in the worst case. A large body of literature addresses optimization and bandits with delayed feedback, offering analysis techniques we could potentially leverage. However, extending our current methods in this direction is beyond the scope of our current work, therefore we do not consider it in our paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ETO: Efficient Transformer-based Local Feature Matching by Organizing Multiple Homography Hypotheses | Accept (poster) | Summary: This paper proposes an efficient transformer-based local feature matching approach named ETO. ETO consists of three steps: hypothesis estimation, segmentation and refinement. The first and second steps can obtain coarse matching results whose resolution is similar to LoFTR’s coarse results. The third step can provide fine-level matching results. The hypothesis estimation performs the Transformer computation on the 1/32 resolution (rather than the 1/8 resolution in LoFTR), which is the core design that reduces the inference time. Experimental results demonstrate that ETO provides a good balance between accuracy and speed.
Strengths: 1. The motivation of this paper is clear and valuable. Transformer-based approaches are prevalent in the area of feature matching. Nonetheless, it is still an open problem how to design an efficient and accurate transformer-based model.
2. The technical design of ETO is novel. The manner of predicting the local homography parameters on a very coarse resolution (and then refining them) differs from most of the existing coarse-to-fine transformer-based approaches.
3. The accuracy of ETO is acceptable, considering its superiority in inference speed.
Weaknesses: **1. This paper lacks some important technical details.**
1.1 What is the computation manner of the Transformer before the hypothesis estimation step? In line 237, the authors state that "then we perform transformer five times at M_1". Does the transformer contain some cross-attention processes? Or does it only involve the self-attention?
1.2 The computation process of the hypothesis estimation step is confusing. The authors seem to use the notations "M_1, M_2, M_3, f_i^1" to represent the features of both the source and target images. Such definitions make many statements in Sec. 3.2 hard to understand. For example, the meaning of the variable M_1 seems inconsistent in the statements "For each unit i on M_1" (line 144) and "within the neighborhood of the target units on M_1" (line 155).
1.3 The computation architecture of the segmentation step is unclear. The authors should provide the computation details on how to predict the classification confidence from the intermediate features. The statement in lines 177-180 is too brief to understand.
1.4 Why is the classification label in the segmentation step termed as the "pseudo" ground truth (line 181)? To my understanding, the classification label of every unit should be obtained from the ground truth camera pose and depth. It should be the "normal ground truth" rather than the "pseudo ground truth" if the above understanding is correct. The authors should clarify this detail.
**2. Some details should be further clarified.**
2.1 The statement "while we only need to feed 300" (line 50-51) was not clarified in the subsequent text. Is it the unit number on the 1/32 resolution for a 640x480 input image?
2.2 In the segmentation step, what is the size of output segmentation results for a local input window whose size is 3x3? Is it 4x4? Or 12x12?
2.3 The statement in lines 304-305 should be further discussed. The authors just state that ETO is better on the more difficult YFCC100M without discussing the probable reason.
2.4 The title "Hypotheses Estimation" (line 137) should be "Hypothesis Estimation".
**3. Some intermediate results should be visualized.**
Some real intermediate results after the hypothesis estimation/segmentation steps should be visualized to show how these steps provide appropriate predictions. The virtual intermediate "results" in Figure 3 are helpful to understand, but the actual results are still necessary.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please provide more discussions and experimental results to address the above weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and valuable comments.
1.1 What is the computation manner of the Transformer before the hypothesis estimation step? In line 237, the authors state that "then we perform transformer five times at M_1". Does the transformer contain some cross-attention processes? Or does it only involve the self-attention?
No, we only use this strategy at the fine level, because this approach cannot cope with the need to iterate the transformer multiple times; it only improves efficiency without compromising accuracy when iterated once. This drawback stems from the fact that it only optimizes the feature used to estimate the refined bias, which is only one out of every 16 features on the source image on the $M_3$ feature map in our method, while the features on the target image are not optimized in this uni-directional attention operation. Therefore, this uni-directional attention can only be used in the refinement stage, and it is not suitable for scenarios that require multiple iterations, such as the coarse level.
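The asymmetry described above can be sketched as follows (a minimal NumPy illustration, not the paper's implementation; shapes and the residual form are assumptions for the example):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Uni-directional cross-attention: only the selected source features
# (queries) are updated; the target features act as keys/values and are
# never modified. This is why the operation is cheaper than bi-directional
# cross-attention but ill-suited to repeated coarse-level iterations,
# where both sides' features need to be refined.
def uni_cross_attention(q_src, kv_tgt):
    d = q_src.shape[-1]
    attn = softmax(q_src @ kv_tgt.T / np.sqrt(d))
    return q_src + attn @ kv_tgt   # residual update on the source side only

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 64))     # e.g. the selected source features on M_3
kv = rng.standard_normal((16, 64))   # target features in the local window
out = uni_cross_attention(q, kv)
```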
1.2 The computation process of the hypothesis estimation step is confusing.
We are trying here to use $a$ and $i$ to represent the units on the target images and source images respectively, and then $a_i$ to describe the unit on the target image that corresponds to unit $i$ on the source image. Since you find this confusing, we will add an $s$ or $t$ annotation in the upper right corner, for example: $f_k^{3s}$ and $f_k^{3t}$
1.3 The computation architecture of the segmentation step is unclear. The authors should provide the computation details on how to predict the classification confidence from the intermediate features. The statement in lines 177-180 is too brief to understand.
Segmentation refers to the classification of each unit, where we determine which homography hypothesis should be adopted for unit $j$ on $M_2$ through classification. The classification result is obtained by comparing the classification scores $C_j$ of unit $j$ for the different hypotheses $H_i$; the largest one is the result of our classification operation. This classification uses the concept of multi-label classification, a method widely applied in detection problems. Therefore, we refer to DETR and use focal loss to optimize the segmentation here. The classification score matrix $C_j$ is computed as $C_{ji} = (T(f_i) + P(i), f_j)$, where $C_{ji}$ is the matching score of unit $j$ for hypothesis $i$; $T$ is the function that converts the feature dimension of $i$ (256 dimensions) to the feature dimension of $j$ (128 dimensions), implemented here with a 2D CNN; $P$ is a positional embedding that directly represents the relative position of the unit corresponding to hypothesis $i$ within the local 3x3 units; and $(\cdot, \cdot)$ denotes the inner product. We will add this part to the supplementary materials later.
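The score computation can be sketched numerically (a hedged NumPy illustration; the linear map standing in for the 2D CNN $T$, the random features, and the embedding scale are assumptions, while the shapes follow the rebuttal: 9 local hypotheses with 256-d features and a 128-d feature for unit $j$):

```python
import numpy as np

rng = np.random.default_rng(0)

f_hyp = rng.standard_normal((9, 256))       # hypothesis features f_i (3x3 window)
f_j = rng.standard_normal(128)              # feature of unit j on M_2
T = rng.standard_normal((256, 128)) / 16.0  # linear stand-in for the 2D CNN T
P = rng.standard_normal((9, 128)) * 0.1     # positional embedding P(i)

# C_{ji} = <T(f_i) + P(i), f_j>: score of unit j for each of the 9 hypotheses.
C_j = (f_hyp @ T + P) @ f_j

# Segmentation result: adopt the highest-scoring homography hypothesis.
chosen = int(np.argmax(C_j))
```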
1.4 Why is the classification label in the segmentation step termed as the "pseudo" ground truth (line 181)? To my understanding, the classification label of every unit should be obtained from the ground truth camera pose and depth. It should be the "normal ground truth" rather than the "pseudo ground truth" if the above understanding is correct. The authors should clarify this detail.
Here we use the term 'pseudo ground truth' merely because the annotations are computed in real time. However, considering that the term 'pseudo-labels' is generally used for labels obtained from the predictions of neural networks, this usage is indeed incorrect and you are right. We will correct this term to 'computed ground truth'.
2.1 The statement "while we only need to feed 300" (line 50-51) was not clarified in the subsequent text. Is it the unit number on the 1/32 resolution for a 640x480 input image?
300 indeed refers to the number of units obtained at 1/32 resolution on a $640 \times 480$ resolution image. We will change this to: 'Previous methods feed $80 \times 60$ tokens to the transformer with 1/8 resolution, while we only need to feed $20 \times 15$ with 1/32 resolution.' This will be easier to understand.
2.2 In the segmentation step, what is the size of output segmentation results for a local input window whose size is 3x3? Is it 4x4? Or 12x12?
The output of the segmentation part is an H/8×W/8 array, with values ranging from 0 to 8. This number determines which local hypothesis should be adopted to calculate the input for the refinement step.
2.3 The statement in lines 304-305 should be further discussed. The authors just state that ETO is better on the more difficult YFCC100M without discussing the probable reason.
All of the methods we tested achieved lower metrics on the YFCC dataset. From this perspective, we consider the YFCC dataset to be more challenging.
2.4 The title "Hypotheses Estimation" (line 137) should be "Hypothesis Estimation".
Thank you for pointing out the typos. We will correct the typos later.
3.1 Some real intermediate results after the hypothesis estimation/segmentation steps should be visualized to show how these steps provide appropriate predictions. The virtual intermediate "results" in Figure 3 are helpful to understand, but the actual results are still necessary.
We provide the visualization results in the pdf of "Author Rebuttal".
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the additional experimental results and discussions. I think the proposed approach is valuable, considering the new technical designs and the good balance of accuracy and efficiency. Therefore, I keep my original rating (Accept). | Summary: The authors propose a local feature matching method that leverages homography to accelerate the transformer-based feature matching pipeline. Additionally, they employ unidirectional cross-attention in the refinement stage to further reduce computational overhead. Experimental results demonstrate the efficiency and effectiveness of this approach.
Strengths: 1. The paper is well-written and easy to understand.
2. Integrating homography as theoretical guidance into the transformer pipeline is a commendable approach, enhancing the transformer-based pipeline with theoretical support.
3. Experimental results demonstrate that the proposed method achieves much smaller time usage, validating its efficiency in practice.
Weaknesses: 1. Though the time usage decreases, the accuracy also decreases (Tables 1, 2, and 3).
2. The paper lacks significant citations in feature matching methods, such as Efficient LoFTR[1], RoMa[2]. Some of these works also focus on improving efficiency in feature matching and should be referenced to provide a comprehensive background. Including these methods in the experiments would offer a more thorough comparison of the proposed approach's performance.
3. Though the authors acknowledge that some other methods (e.g., [14, 38]) are better in certain aspects (line356-357), it would be beneficial to include a comparative analysis with these methods. This would provide a clearer understanding of the strengths and weaknesses of the proposed approach.
[1]. Wang, Yifan, et al. "Efficient LoFTR: Semi-dense local feature matching with sparse-like speed." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[2] Edstedt, Johan, et al. "RoMa: Robust dense feature matching." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: No question to the authors.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and valuable comments.
1. Though the time usage decreases, the accuracy also decreases (Tables 1, 2, and 3).
Yes, you are right. However, we argue that the huge improvement in runtime is valuable in real-time applications such as robotics or SLAM.
2. The paper lacks significant citations in feature matching methods, such as Efficient LoFTR, RoMa. Some of these works also focus on improving efficiency in feature matching and should be referenced to provide a comprehensive background. Including these methods in the experiments would offer a more thorough comparison of the proposed approach's performance.
We add a couple of experiments on the MegaDepth dataset in the Author Rebuttal materials. Note that our paper is contemporaneous with Efficient LoFTR. The accuracy of Efficient LoFTR is a bit better than LoFTR's, and its runtime of 56.9 ms is much faster than LoFTR's 93.2 ms; however, it is still much slower than our 21 ms. And while PATS gets extremely good results, it is almost 100 times slower than our method.
3. Though the authors acknowledge that some other methods (e.g., [14, 38]) are better in certain aspects (lines 356-357), it would be beneficial to include a comparative analysis with these methods. This would provide a clearer understanding of the strengths and weaknesses of the proposed approach.
The same as Q.2
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' experimental results. However, I noticed that my request for a comparison with RoMa was not addressed in your response. RoMa is another relevant SOTA method which should be included in the comparison.
---
Rebuttal 2:
Title: Comparing ETO with Roma and Tiny-RoMa on Megadepth.
Comment:
| Method | auc@5 | auc@10 | auc@20 | runtime on RTX 2080ti (ms) |
| :---------: | :-------------: | :-------------: | :-------------: |:-------------: |
|RoMa | 64.8 | 77.4 | 86.1 | 688.8|
|Tiny-RoMa| 36.2 | 53.6 | 67.5 | 29.0 |
| ETO | 51.7 | 66.6 | 77.4 | 21.0 |
The experiments show that when efficiency and accuracy are both taken into account, ETO has advantages over the state-of-the-art methods. Note that Tiny-RoMa's demo on GitHub uses a very strong set of RANSAC parameters (ransacReprojThreshold=0.2, method=cv2.USAC_MAGSAC, confidence=0.999999, maxIters=10000), so it is not surprising that its results decrease when using settings consistent with ours.
---
Rebuttal Comment 2.1:
Comment: Thank you for the additional comparisons with RoMa and the discussion provided. This effectively addresses my concerns. Your work is both interesting and effective, and I will maintain my initial rating. | Summary: This paper proposes a novel framework for efficient transformer-based local feature matching. Transformer-based local feature matching usually contains two stages: a coarse matching stage which applies self-attention and cross-attention on coarse-level features (usually H/8 x W/8) to obtain coarse matches, and a refinement stage that refines the coarse matches in a local window based on fine-grained features. This paper proposed two methods to improve the efficiency of this pipeline. First, the coarse matching is performed on a even coarser level (H/32 x W/32) to reduce the computational cost of the costly attention operations. Additionally, homography hypothesis for each patch is estimated at this level. Leveraging the piece-wise smooth prior, the matching at H/8 x W/8 resolution is directly approximated by selecting the most probable homography hypothesis in local windows using a proposed segmentation technique. Second, in the refinement stage, the bi-directional cross attention is reduced to uni-direction one. Experiments show that the proposed framework achieves comparable performance with less inference time compared with state-of-the-art transformer-based and local feature matching algorithms.
Strengths: 1. The idea of using piece-wise smooth prior to accelerate transformer-based feature matching is novel and promising, and the supervision and the homography hypothesis re-selection algorithms are carefully designed.
2. Comprehensive ablation studies show that the key design features, including homography hypothesis proposal, homography hypothesis re-selection and uni-directional cross attention have positive effects on the performance and are necessary.
Weaknesses: 1. The symbols used in the method part are too complicated, making it hard to read.
2. The symbol $\mathscr{H}$ used in figure 4 is undefined in the text.
3. The framework adopts two-stage training, possibly making it hard to train and limiting the accuracy.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Do you have any plan to reorganize the symbols?
2. If the framework is trained end-to-end, will it yield acceptable performance?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitation of this work in the conclusion and limitations section of the paper. Enabling the network to be trained end-to-end, allowing for dense matching and further improving accuracy are my suggestions for directions of improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and valuable comments.
1. The symbols used in the method part are too complicated, making it hard to read.
Thank you for pointing this out; we apologize for the complicated symbols.
2. The symbol $\mathscr{H}$ used in figure 4 is undefined in the text.
Thanks for pointing this out. The $\mathscr{H}$ here refers to the output of the segmentation, i.e., the set of homography transformations that each unit on $M_2$ is ultimately assigned to; we will add this explanation.
3. The framework adopts two-stage training, possibly making it hard to train and limiting the accuracy.
Yes, an end-to-end pipeline may improve our method, but we don't have enough computing resources to complete it. We will try this idea in future work.
4. Do you have any plan to reorganize the symbols?
Maybe we can directly add $s$ and $t$ to the features, changing $f_k^3$ to $f_k^{3s}$ and $f_k^{3t}$; this would be clearer.
5. If the framework is trained end-to-end, will it yield acceptable performance?
Then the batch size would be very small, making the training process very slow. An end-to-end pipeline might improve our method, but we don't have enough resources to complete it.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the additional experimental results and discussions. The proposed method demonstrates strong accuracy and efficiency, with novel and well-reasoned technical designs. The authors have addressed my questions and have outlined a plan to clarify the symbols. Therefore, I will maintain my original rating of "Weak Accept." | Summary: This paper presents a local feature matching method based on the multiple homography hypotheses. The paper explicitly introduces the homography hypotheses for coarse matching, combined with homography segmentation, cross-attention, and sub-pixel refinement to obtain fine matching results. The introduction of the homography hypotheses significantly reduces the number of input tokens for the attention mechanism, and with further model modification, the algorithm proposed in this paper has achieved a substantial increase in inference speed while maintaining performance as much as possible. Experiments on multiple datasets have proven the effectiveness and efficiency of the proposed method.
Strengths: Clear motivation, reasonable structural design, convincing experiments
Weaknesses: -The pre-compiled transformer model may have some impact on inference speed, can additional tests be conducted on the efficiency of a normal transformer module?
-Lack of the latest state-of-the-art comparison: Some of the latest methods, such as RoMA[1] and EfficientLoFTR[2], have been made public at an early stage, and thus it is necessary to compare and analyze with these methods.
[1] Edstedt, Johan, et al. "RoMa: Robust dense feature matching." CVPR 2024.
[2] Wang, Yifan, et al. "Efficient LoFTR: Semi-dense local feature matching with sparse-like speed." CVPR 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: -In Section 3, it is 2-stage training adopted, but in Implementation Details, the training phase contains 3 stages. Why there is a joint training stage at first? Moreover, the joint training is usually conducted for the final finetuning, why it is the first stage in your strategy?
-For Basic Refinement w/ Segmentation in Section 4.4, why it is the same training hours instead of the same training samples? Early stopping may lead to performance drop.
-In the introduction, the authors claim that the multiple self- and cross-attention in the fine-level stage are redundant, is there any numerical results? What if you stack several uni-directional cross-attention in your method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and valuable comments.
1. The pre-compiled transformer model may have some impact on inference speed, can additional tests be conducted on the efficiency of a normal transformer module?
Without the pre-compiled transformer model, the runtime of ETO increases from 21.0 ms to 32.8 ms for 640*480 images, while the runtime is almost the same for LightGlue.
2. Lack of the latest state-of-the-art comparison.
Note that our paper is contemporaneous with Efficient LoFTR. The accuracy of Efficient LoFTR is a bit better than LoFTR's, and its runtime of 56.9 ms is much faster than LoFTR's 93.2 ms; however, it is still much slower than our 21 ms. And while PATS gets extremely good results, it is almost 100 times slower than our method. More detailed results on the MegaDepth dataset can be found in the supplementary table inside the Author Rebuttal.
3. In Section 3, it is 2-stage training adopted, but in Implementation Details, the training phase contains 3 stages. Why there is a joint training stage at first? Moreover, the joint training is usually conducted for the final finetuning, why it is the first stage in your strategy?
Our purpose is to handle multi-resolution images better, which is of great significance for real-world applications, although this is not demonstrated in the standard experimental data in this paper. This step is placed in the first stage rather than the second because the different resolutions of the images mean that the coarse matching stage has to deal with data inputs of variable sizes, which has a relatively large impact on the results. Correspondingly, the fine matching stage can use inputs of fixed size from the coarse stage and is therefore inherently more robust to resolution changes.
4. For Basic Refinement w/ Segmentation in Section 4.4, why it is the same training hours instead of the same training samples? Early stopping may lead to performance drop.
We aimed to demonstrate our superiority in training efficiency. When using the same number of training samples, the accuracy of this method is 52.3/67.1/77.9 (auc@5/auc@10/auc@20) and the runtime is still 32.8 ms, while our accuracy is 51.7/66.6/77.4 (auc@5/auc@10/auc@20) and the runtime is 21.0 ms.
5. In the introduction, the authors claim that the multiple self- and cross-attention in the fine-level stage are redundant, is there any numerical results? What if you stack several uni-directional cross-attention in your method?
The numerical results are given in Q.4: a simple version of cross attention achieves accuracy similar to a complete transformer with self-attention and cross-attention, while being much faster.
---
Rebuttal 2:
Comment: Thanks for the response.
Although most of my questions are resolved, some were ignored by the authors.
1. In Q3, I wonder whether stacking more of the proposed uni-directional cross-attention blocks would be helpful and make the method more powerful.
2. The authors compared with PATS instead of RoMa, as I mentioned in W2. Additionally, some other reviewers also mentioned RoMa; I do not know why the authors try to avoid comparing with RoMa. In my experience, RoMa runs at ~180 ms/frame on MegaDepth with an RTX 3090, and there is a tiny version of RoMa. According to the paper and official repo, the accuracy of RoMa (62.6/76.7/86.3) and Tiny-RoMa (56.4/69.5/79.5) is much better than that of the proposed method (although the proposed method is faster).
The authors' evasiveness heightens my concern. I do not know whether stacking the proposed uni-directional cross-attention is useless, and I do not know why RoMa, which has been open-sourced for a long time, was ignored. However, it is still an interesting work, and I will maintain the original rating (Weak Accept).
---
Rebuttal 3:
Title: Experiment on two uni-directional cross attention and comparing ETO with Roma and Tiny-RoMa on Megadepth.
Comment:
| Method | auc@5 | auc@10 | auc@20 | runtime on RTX 2080ti (ms) |
| :---------: | :-------------: | :-------------: | :-------------: |:-------------: |
|RoMa | 64.8 | 77.4 | 86.1 | 688.8|
|Tiny-RoMa| 36.2 | 53.6 | 67.5 | 29.0 |
| ETO with 2 uni-directional attention | 52.0 | 66.6 | 76.8 | 22.7 |
| ETO | 51.7 | 66.6 | 77.4 | 21.0 |
The experiments show that when efficiency and accuracy are both taken into account, ETO has advantages over the state-of-the-art methods. Note that Tiny-RoMa's demo on GitHub uses a very strong set of RANSAC parameters (ransacReprojThreshold=0.2, method=cv2.USAC_MAGSAC, confidence=0.999999, maxIters=10000), so it is not surprising that its results decrease when using settings consistent with ours.
ETO with 2 uni-directional attention blocks only makes a small, unimportant difference in performance. We consider that this is because the uni-directional attention manages to improve efficiency without compromising accuracy only when it is iterated once. This drawback stems from the fact that it only needs to optimize the feature used to estimate the refined bias, which is only one out of every 16 features from the source image on the $M_3$ feature map according to our method, while the features on the target image are not optimized in this uni-directional attention operation. Therefore, stacking more uni-directional attention cannot greatly improve our method.
---
Rebuttal 4:
Comment: Thanks for the response, which partly addressed my question.
According to the response, stacking the uni-directional attention achieves an improvement on auc@5 but a degradation on auc@20 (both are small), and it seems that you did not re-train a new model (due to the rebuttal schedule) but simply reused the attention parameters.
When it comes to the comparison with RoMa and Tiny-RoMa, thanks for your insight into the experiment settings. As the metric is the accuracy of the camera pose, which only needs several matches (e.g., 8 matches) to regress, it is acceptable to filter out less reliable matches when computing camera poses. Maybe you can also test your method with a strict setting (just a suggestion, not required).
Nevertheless, this work is interesting and provides an effective and efficient method for 2-view matching.
---
Rebuttal Comment 4.1:
Comment: We only re-trained the refinement part with two uni-attention blocks, which took us only 12 hours, and we did not simply reuse the attention parameters.
However, here we just stack two uni-attention blocks and supervise them once; perhaps supervising the residuals of each of them could bring a meaningful improvement.
Finally, thank you again for your high opinion and useful suggestions for our work. | Rebuttal 1:
Rebuttal: Thank you for your review and valuable comments. We will correct the problems mentioned by the reviewers.
Here we add some comparative experiments against some recent methods; compared with these methods, ETO still has a high advantage in efficiency. We also provide a graphical representation of how our segmentation and homography hypotheses work.
Pdf: /pdf/894c684778a3f8064e6eaa82cb9579a89ad20c29.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper propose an efficient framework to reduce the computational load of transformer-based matching approaches. It is a coarse-to-fine two-stage solution. During the coarse matching phase, multiple homography hypotheses are estimated to approximate continuous matches. Each hypothesis encompasses few features to be matched, reducing the number of features that require enhancement via transformers. In the refinement stage, the unidirectional cross-attention is proposed to replace the classical bidirectional self-attention and cross-attention mechanism, which further decreases the cost of computation. Comprehensive evaluations on other open datasets such as Megadepth, YFCC100M, ScanNet, and HPatches are presented to demonstrate its efficacy.
Strengths: The whole idea makes sense to me. The experimental studies are quite comprehensive.
Weaknesses: Some parts of the description are quite vague to me, which may hinder its reproducibility. Some of the experimental results may need further clarification.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. I think some descriptions in Sec 3.4 need further clarification, e.g., Line 192-193 $\hat{f^3_j}$ is computed by querying the $f3_k$ with …, is the query done by nearest neighboring searching?
2. How is $\triangle P_j^t$ in Line 188 obtained or is it predicted?
3. In Figure 3, is there an MLP after cross-attention block? If yes, what is the input/output. Also, the presence of “Module 3” is confusing, should it contain the cross attention?
4. In Table 4, it seems the usage of bidirectional cross-attention performs worse than that of uni-directional cross-attention at auc@5. Is that reasonable? Please give some analysis.
5. From Table 2, the performance of LightGlue is much worse than ASpanFormer and LoFTR. Interestingly, ASpanFormer is better than LoFTR by a small margin on the MegaDepth dataset, similar to the results reported in the LightGlue paper. However, the performance gap between LightGlue and LoFTR here is much larger than that in the LightGlue paper; could you please explain? Moreover, why is LightGlue not reported on the ScanNet dataset?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: 1. Some of the network architectures are not depicted in figures, making it difficult to know the overall diagram of the method.
2. The two-stage training process may not be easily trained.
3. It would be better to present some failure case studies.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and valuable comments.
1. Some descriptions in Sec 3.4 need further clarification.
Yes. Due to space limitations, we have included a more intuitive description and illustrations of this step in Sec.2 in the supplementary materials. Specifically, since the corresponding point $P_j^s$ on the source image is self-chosen, following LoFTR and for computational convenience, we select a point $P_j^s$ at the center of each specific feature $\hat{f}_j^3$ on the $M_3$ feature map. For the target image, after calculating the initial $P_j^t$ coordinates through the accepted $H$ hypotheses, we query the 7*7 features around $P_j^t$ with $\hat{f}_j^3$. We then perform cross attention between this single feature from the source image and the 49 features from the target image. If we were to use traditional methods to perform a complete transformer, we would need a 49*49 transformer to achieve such a large receptive field, but here we only need a 1*49 transformer.
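To make the size comparison concrete, here is a minimal numpy sketch of this uni-directional step (the feature dimension and data are invented for illustration; this is not the paper's implementation): a single query feature attends over the 7*7 window of 49 target features, so the score vector is 1*49 rather than the 49*49 score matrix a full bidirectional transformer would need.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                               # feature dimension (hypothetical)
q = rng.standard_normal(d)           # the single source feature being refined
K = rng.standard_normal((49, d))     # 7x7 = 49 target-window features (keys)
V = K                                # values shared with keys in this sketch

# Uni-directional cross attention: a 1x49 score vector, not a 49x49 matrix
scores = (K @ q) / np.sqrt(d)
w = np.exp(scores - scores.max())
w /= w.sum()                         # softmax attention weights over the window
refined = w @ V                      # attended update of the source feature
```

Only the source-side feature is updated here, which is exactly why the cost drops from quadratic to linear in the window size.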
2. How is $\Delta P_j^t$ in Line 188 obtained or is it predicted?
After refining $\hat{f}_j^3$ using the above method, we follow traditional methods (such as LoFTR) to correlate this feature with the 49 features on the target image, we obtain a local heatmap representing the matching probability for each feature. By computing the expectation over this probability distribution, we can obtain the final offset for matches.
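A toy numpy illustration of this expectation step, using a synthetic 7x7 correlation map (the peak location is made up for the example; in the method the map comes from correlating the refined source feature with the 49 target features):

```python
import numpy as np

# Synthetic 7x7 local correlation map peaked near (row=4, col=2)
rows, cols = np.mgrid[0:7, 0:7]
corr = -((rows - 4.0) ** 2 + (cols - 2.0) ** 2)

# Softmax turns correlations into a matching-probability heatmap over the window
p = np.exp(corr - corr.max())
p /= p.sum()

# Expected offset = probability-weighted coordinates relative to the window center (3, 3)
dy = float((p * (rows - 3)).sum())
dx = float((p * (cols - 3)).sum())
```

Because the map peaks one row below and one column left of the window center, the expected offset lands close to (dy, dx) = (1, -1), giving a sub-pixel match position.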
3. In Figure 3, is there an MLP after cross-attention block? If yes, what is the input/output. Also, the presence of “Module 3” is confusing, should it contain the cross attention?
Here, Module 3 refers to the entire refinement part, including the cross-attention part. The MLP shown in the figure follows traditional methods (like PATS, ASpanFormer, or LoFTR) in the transformer module, passing the feature through an MLP before computing the expectation. Its function is to further enhance the $\hat{f}_j^3$ mentioned in Section 3.4.
4. In Table 4, it seems the usage of bidirectional cross-attention performs worse than that of uni-directional cross-attention at 5. Is that reasonable, please give some analysis.
This is because we use the same training time rather than the same number of training samples as the comparison criterion, which makes a small but not significant difference to the effectiveness of the model. If we had used the same number of training samples, auc@5 would have reached 52.3, which is better than ours, though still much slower.
5. From Table 2, the performance of lightGlue is much worse than ASpanFormer, LoFTR. Interestingly, ASpanFormer is better than LoFTR with a small margin in the Megadepth dataset, similar to the results reported in LightGlue. The performance gap between LightGlue and LoFTR is much larger than that in LightGlue, could you please explain? Moreover, why LightGlue is not reported in the dataset of Scannet?
It is important to note that LightGlue employs two RANSAC methods in its paper: LO-RANSAC and OpenCV's RANSAC. We use OpenCV's RANSAC exclusively. In LightGlue's experiments, it is evident that under OpenCV's RANSAC, LightGlue's performance significantly decreases. Additionally, to better demonstrate the impact of sub-pixel precision, we use a RANSAC threshold of 0.25 in outdoor scenarios. Under this threshold, LoFTR, ASpanFormer, and our method ETO achieve smaller errors within this precision range, further widening the gap with LightGlue. The reason LightGlue was not tested on ScanNet is that its model was not trained on indoor scenes, so the comparison would be unfair.
6. Some of the network architectures are not depicted in figures, making it difficult to know the overall diagram of the method.
Due to space limitations, we have placed the illustrations of the refinement steps in the supplementary materials.
7. The two-stage training process may not be easily trained.
Yes, an end-to-end pipeline may improve our method. But we don't have enough computing resources to complete it.
8. It would be better to present some failure case studies.
A shortcoming of our approach is that in the case of very complex planar assemblages, we provide too few planar (homography) hypotheses, and they are also more difficult to distinguish. An example can be found in Figure 1 of our Author Rebuttal.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
While I appreciate the addition of valuable experiments and the inclusion of a failure case study, I still believe that the current version of the manuscript lacks sufficient descriptions of some technical details. As a result, I will be maintaining my initial rating. | null | null | null | null | null | null |
Higher-Order Causal Message Passing for Experimentation with Complex Interference | Accept (poster) | Summary: The paper considers the problem of total treatment effect estimation under interference, when the interference structure is unknown. It proposes an estimation approach based on fitting a data-driven model for low-dimensional dynamics of the mean and variance of observed experimental outcomes over time, motivated by state evolution dynamics. These state evolution equations come from approximate message-passing (AMP) applied to the potential outcome dynamics chosen in Equation 2. To produce total treatment effect estimates, initial means and variances are observed from the experiment and propagated forward via the learned state evolution model.
Strengths: - The paper investigates an important unsolved problem in causal inference.
- The data-driven model of the state evolution equations does not rely upon exact functional forms of the dynamics and instead attempts to learn the appropriate dynamics from the observed data. This increases methodological flexibility by allowing a portion of the dynamics, in addition to the interference, to remain unknown.
- The empirical analysis includes experiments beyond the setting under which the estimator is derived ( e.g. interference occurs across binary edges, the outcome generating process used, and staggered rollout design), increasing the relevance for practice. Improvements are seen over the few available baselines for the unknown interference setting.
Weaknesses: - The authors should clarify the scope of their algorithm. Equation 2 defines specific dynamics and does not meet the generality emphasized in the introduction and abstract around equilibrium and interference. These dynamics do not obviously cover what may be occurring in real-world systems (e.g. lacking memory and being additive). The authors motivate their paper as relaxing assumptions, but introduce their own alternative assumptions in Section 2.1. The necessity of those assumptions is also undiscussed, leaving the range of applicability unclear. For example, the interference matrix is assumed iid Gaussian but the experiments contain binary edges.
- While past research on Causal Message Passing is cited prominently in the paper, the paper could use more discussion of what is novel and different here. To my understanding, the novel contribution is primarily the HO-CMP algorithms introduced in Section 3. While the experiments demonstrate improvements from using HO-CMP, the idea of data-driven modeling of the state evolution equations appears to be a relatively straightforward extension.
- The HO-CMP algorithms in Table 1 only leverage simple feature specifications that are low-dimensional.
- Algorithm 1's computational complexity and scalability is unaddressed.
- Overall the paper could use increased clarity. Many complex ideas are mentioned and occasionally described in insufficient detail. For example, the authors do not justify the necessity of considering state evolution instead of modeling Equation 2 directly.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Why only consider the mean of W_t as a feature in Table 1?
2. Equation 3: sums should be over i not n?
3. What are the initial conditions of the counterfactual estimation part of Algorithm 1?
4. In Equation 2, are the diagonal entries (i=j) treated the same as the off-diagonal entries? What is the motivation for this?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Comment: We thank the reviewer for carefully reading our paper and providing insightful comments. Below we provide responses to the weaknesses and questions you raised.
Response to Weaknesses:
Thanks for helping us clarify the scope of our algorithm. In the introduction, we highlighted our algorithm can generalize the existing estimation methods under network interference in two aspects. First, our algorithm efficiently uses the data before the system reaches equilibrium. Second, our algorithm does not require knowledge of the network. Indeed Equation (2) encompasses these two aspects. We use data before equilibrium so that there are T periods in total (not just the last period at equilibrium). Moreover, we do not require the knowledge of the interference matrix G in Equation (2).
In addition, properties such as lacking memory and being additive can be covered by the function g_t(). Specifically, lacking memory happens when the coefficient of Y_t^j is zero and the coefficients of W_0^j, …, W_t^j are zero. Being additive occurs when g_t() is a linear function of Y_t^j, W_0^j, …, W_t^j.
The Gaussian assumption of the interference matrix is imposed to theoretically prove the state evolution in Equation (4). However, a key point of this paper is to show that the treatment effect estimation based on state evolution is accurate for general graphs, including the cases where the interference matrix does not follow the Gaussian distribution. Therefore, in the experiments, we consider the graphs with binary edges and illustrate the benefit of using our proposed method.
Thanks for helping us be more clear about our contribution as compared to [SB23]. It is true that the main contribution is in Section 3 that provides more general and more accurate estimation methods of treatment effects. Our algorithm is more general in the sense that we allow for any number of \pi’s, whereas [SB23] only accounts for two different \pi’s in the randomized experiment. Our algorithm is more accurate because of two conceptual differences with respect to [SB23]. First, we use the state evolution of both the first and second moments (\nu_t and \rho_t^2), while [SB23] only uses the state evolution of the first moment. Second, we use a data-driven approach to learn g_t(), while [SB23] considers a linear specification of g_t().
You are right that the feature specifications in Table 1 only include first and second moments. We have explored the inclusion of higher-order moments, but the variance and MSE are generally higher than for the three versions of HO-CMP in Table 1.
Our algorithm runs in O(NT) time. Once the mean and variance are calculated for every time t in step 1 (which requires runtime linear in both N and T), the remaining steps only use the mean and variance. Therefore the algorithm is scalable in N (and in practice N is generally much larger than T).
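As an illustrative numpy sketch of this first step (synthetic data, not the paper's code): the N-by-T outcome array is reduced to per-period, per-arm means and variances in a single linear pass, and everything downstream touches only these summary statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 20
Y = rng.standard_normal((T, N))      # outcomes of N units over T periods (synthetic)
W = rng.integers(0, 2, size=N)       # binary treatment assignment of each unit

# Single O(NT) pass: per-period mean and variance within each treatment arm;
# later steps only see these 2T numbers per arm, not the raw outcomes.
nu = {w: Y[:, W == w].mean(axis=1) for w in (0, 1)}
rho2 = {w: Y[:, W == w].var(axis=1) for w in (0, 1)}
```

This is why the method scales to large N: the raw outcomes are never revisited after this reduction.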
Thanks for helping us be more clear. The state evolution equations are derived from Equation (2) and hold for a broad class of unknown functions g_t() and unknown interference matrices G. As the state evolution equations only involve summary statistics, they offer the potential for developing efficient treatment effect estimation methods (in particular, more efficient than [SB23]). This is exactly the problem studied in this paper.
Response to Questions:
W_t^j is binary. The higher-order moments are therefore the same as the mean.
You are right. We will correct it in the revision.
We set the initial conditions (i.e., \nu(0), \nu(1), \rho^2(0), and \rho^2(1) at time 0) as the mean and variance of observed outcomes at time 0 (i.e., \nu(0) and \nu(1) are set as the same; similarly, \rho^2(0) and \rho^2(1) are set as the same). We then compute \nu(0), \nu(1), \rho^2(0), and \rho^2(1) for time t=1, …, T. The difference between \nu(1) and \nu(0) is the estimated TTE at time t. As one can expect, the estimation of TTE is more accurate as t increases.
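To make the counterfactual-propagation idea concrete, here is a toy numpy sketch under an assumed one-dimensional linear mean dynamic (the coefficients a, b, c and the two experimental treatment fractions below are invented for illustration; the actual HO-CMP features and the variance recursion are richer): fit the dynamic from trajectories observed under two treatment fractions, then roll it forward under pi=0 and pi=1 and take the difference of the limits as the TTE estimate.

```python
import numpy as np

# Assumed ground-truth dynamic: nu_{t+1} = a*nu_t + b*pi + c  (hypothetical)
a, b, c = 0.6, 2.0, 0.5

def simulate(pi, T=30, nu0=0.0):
    nu = [nu0]
    for _ in range(T):
        nu.append(a * nu[-1] + b * pi + c)
    return np.array(nu)

# "Experiment" observed under two treatment fractions, as in the multi-pi design
X, y = [], []
for pi in (0.3, 0.7):
    traj = simulate(pi)
    for t in range(len(traj) - 1):
        X.append([traj[t], pi, 1.0])   # features: current mean, pi, intercept
        y.append(traj[t + 1])
theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

# Roll the learned dynamic forward for all-control (pi=0) and all-treated (pi=1)
def forecast(pi, T=200, nu0=0.0):
    nu = nu0
    for _ in range(T):
        nu = theta[0] * nu + theta[1] * pi + theta[2]
    return nu

tte_hat = forecast(1.0) - forecast(0.0)
tte_true = b / (1 - a)   # difference of the two fixed points
```

Because the toy data is noiseless and linear, the fitted dynamic matches the truth and the forecasted TTE converges to b/(1-a); with real data the same rolling-forward step produces increasingly accurate estimates as t grows, as noted above.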
You are right that the diagonal and off-diagonal entries are treated the same. It is possible to treat them differently and then separately identify the direct and spillover effects. However, as our goal is to identify TTE, it is sufficient to consider the model in Equation (2) and then the estimated effect will be the sum of direct and spillover effects.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications in the rebuttal, but my opinion on the paper overall is unchanged. | Summary: This paper discusses the causal effect estimation under spatiotemporal interference characterized by a dynamic system. The authors are devoted to improving the causal message-passing framework by adding interaction terms of inputs in the summary function $f_\theta$ (a linear regressor). Synthetic and semi-synthetic experiments are conducted to show the performance of the proposed method.
Strengths: 1. The authors explicitly present their methodology with clear definitions and notations.
2. The experiment results are summarized clearly.
Weaknesses: 1. The comparison with the most related work [1] is very important but scarce, and I think the marginal contribution is little.
The basic framework is all the same with [1], and the scanty innovation of this paper is the so-called "higher order", which is formulated as two more interaction terms and a square term. I don't think these additional input terms are necessary when given a non-linear model $f_\theta$, especially when the authors don't develop any theory that necessitates the simplicity of linear regression.
2. The literature review is disordered, and there are apparent misunderstandings for certain important works. I list some examples in the following.
- The taxonomies "Restrictions on interference structure" and "Treatment effect dynamics and temporal interference" overlap severely: [2] studies general temporal interference characterized by an MDP, yet the authors list this paper with two other two-sided marketplace papers under "Restrictions on interference structure".
- I don't understand why [3] and [4] are discussed in the category of "Partial interference", since these two papers are important and don't belong to this category.
3. More discussion on the basic setting is needed. I think neither the authors nor [1] discuss the concrete and applicable scenario of the dynamic system to show it's indeed meaningful, especially when they are agnostic to the network. I take network interference as an instance for the cross-sectional part. As the number of units increases, how the network topology change indeed impact the scenario, e.g. whether the degree of units increases with $n$ (Erdos-Renyi) or just keeps constant? The details behind the asymptotics indeed matter.
Ref.
[1] Sadegh Shirani and Mohsen Bayati. Causal message passing: A method for experiments with unknown and general network interference. arXiv preprint arXiv:2311.08340, 2023.
[2] Vivek Farias, Andrew Li, Tianyi Peng, and Andrew Zheng. Markovian interference in experiments. Advances in Neural Information Processing Systems, 35:535–549, 2022.
[3] Christina Lee Yu, Edoardo M Airoldi, Christian Borgs, and Jennifer T Chayes. Estimating the total treatment effect in randomized experiments with unknown network structure. Proceedings of the National Academy of Sciences, 119(44):e2208975119, 2022.
[4] Johan Ugander, Brian Karrer, Lars Backstrom, and Jon Kleinberg. Graph cluster randomization: Network exposure to multiple universes. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 329–337, 2013.
Technical Quality: 1
Clarity: 2
Questions for Authors: I have no questions.
Confidence: 5
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: No, the limitations of this paper are discussed inadequately, as discussed in weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Comment: Thank you for reading our work. As outlined in our global response, this research extends the estimation method of [SB23] by adopting a more data-driven approach. We will carefully revise the manuscript to ensure that the unique contributions of our work are clearly discussed and distinguished.
Regarding the literature review, we first emphasize that all the mentioned works have made significant contributions to the network interference problem. Specifically, [FLPZ22] is an excellent work focusing on experiments in dynamical systems. However, as clarified by the authors, their approach considers systems where treating some units impacts others through a limiting constraint, which is a **special case** of interference structure between units. By "Restrictions on interference structure," we refer to works that address the interference problem within a particular setting. Meanwhile, "Treatment effect dynamics and temporal interference" refers to studies that examine variations in the treatment effect over time. Additionally, [YABC22] and [UKBK13] are both crucial papers that we cited in the section related to "Partial interference," even though they do not adhere to the partial interference assumption. Indeed, these works effectively highlight the challenges associated with the broad applicability of studies in the "Partial interference" category and propose innovative solutions to mitigate these issues. We will ensure that the presentation is clearer to avoid these types of confusion.
Regarding the theoretical foundations of Causal-MP: its core result proves that, under certain assumptions, as $N$ grows, sufficient statistics (means and variances) of the outcomes evolve according to the state evolution equations. We also provide empirical simulations under network structures and outcome specifications studied in the prior literature, with diverse network topologies (e.g., the linear-in-means specification on random graphs studied by Leung, which we generalize to non-linear specifications and to real network data). For additional details, please see Section 5 of [SB23]. | Summary: This paper studies experimental interference, in which the treatment assignment of one unit affects the outcome of another. The majority of work in this area assumes interference acts through a network, and requires knowledge of that network to reduce bias in the resulting estimator. This paper takes a general approach in which interference occurs due to an unknown graph, and a unit's outcome can be affected by its neighbors' treatment status as well as their past outcomes. With such a flexible potential outcomes model, it is very difficult to estimate the total treatment effect in an unbiased way. The paper suggests an extension of the causal message passing algorithm that estimates the state evolution equations of the potential outcomes using higher-order functions of the past treatment status and outcomes, as opposed to the original CMP paper, which used linear models of these quantities.
Strengths: The paper provides thorough experimental validation of the proposed method. In particular, the method is evaluated on simulated and real graphs, and is compared against three baselines including Cortez et al's polynomial fit method. The experiments are simulated under the staggered rollout design, which is of interest in the academic literature as well as in practical industry settings.
The ideas in the paper are presented well, with a high quality of writing throughout.
Weaknesses: If I understand correctly, this work extends the CMP algorithm of Shirani and Bayati by using nonlinear estimators of the state dynamics. It is unclear to me that this delta is a significant enough improvement over the original CMP method to merit publication at Neurips.
I reviewed the Shirani and Bayati paper, and am concerned by how closely the writing, structure, and in some cases individual sentences of this paper parallel that work. I will leave it to the AC to make any recommendations about copyright / academic integrity, but here are the passages of most concern:
1. The phrase "Inspired by the literature on AMP algorithms, we refer to (4) as state evolution equations of the experiment" appears verbatim in the original CMP paper.
2. The related works section of this paper appears to be a reworded and edited version of a subset of the related works of the CMP paper. In particular, the paragraphs beginning on lines 124 and 134 almost exactly mirror their counterparts in the CMP paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Is the HO-CMP-I algorithm the same as that of Shirani and Bayati? If not, I would suggest including the original CMP algorithm as a baseline.
2. [Authors do not need to provide new experiments; discussion would suffice.] The HO-CMP algorithm outperforms polyfit as shown in Figure 1. It's unclear to me if this is due to increased regularization (maybe a 2-degree polynomial provides a better approximation to a half-sine than a 4-degree polynomial on the data in Figure 1) or an inherent property of the HO-CMP algorithm (maybe the claim is that not all types of interference are captured by the polyfit algorithm?). I would be curious if the authors had some intuition here, perhaps an example where polyfit wasn't obviously overfitting, but HO-CMP still outperformed polyfit.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 1
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Comment: Thank you for your thoughtful comments. Indeed, the reviewer is right, and as explained in our global response, this work builds on the foundation laid by [SB23] by extending their estimation method. We would also like to clarify that there was no intention to mirror the writing of [SB23]. However, it was necessary to discuss the problem setup and relevant literature. In the revision, we will ensure that the differences in the writing are clear and that the unique contributions of this work are highlighted.
To compare HO-CMP and PolyFit, it is important to note that HO-CMP utilizes more data by incorporating the evolution of the experiment over time. In contrast, PolyFit relies solely on the outcome observation at the equilibrium point where the treatment effect stabilizes. As a result of this additional data, HO-CMP can effectively mitigate the issue of overfitting that often affects PolyFit. Additionally, HO-CMP-i can be considered a generalization of the [SB23] estimator, allowing for more than two stages in the experiment.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. | Summary: The authors motivate a regression framework using Causal Message Passing to estimate global average treatment effects under some unknown forms of interference. The simplest way to think about their proposed algorithm is that CMP is used to motivate particular sufficient statistics for the interference dynamics which are fed into a regression model to perform corrections and enable estimation.
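As a rough sketch of the pipeline described above, the following toy example fits a summary-function regression on experiment-level statistics and rolls the fitted dynamics forward to estimate the TTE. The dynamics, feature set (one interaction term rather than the paper's full set), and plain least-squares regressor are my own illustrative assumptions, not the authors' exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy experiment summaries over T periods under a staggered rollout:
# treated fraction w_t and mean outcome nu_t (illustrative, not the paper's data).
T = 40
w = np.repeat([0.1, 0.2, 0.4, 0.5], T // 4)
nu = np.zeros(T)
for t in range(T - 1):
    nu[t + 1] = 0.5 * nu[t] + 1.0 * w[t] + 0.2 * w[t] * nu[t] \
                + 0.01 * rng.standard_normal()

# "Higher-order" summary function: regress nu_{t+1} on w_t, nu_t, and an
# interaction term, in the spirit of the regression the reviews describe.
X = np.column_stack([np.ones(T - 1), w[:-1], nu[:-1], w[:-1] * nu[:-1]])
theta, *_ = np.linalg.lstsq(X, nu[1:], rcond=None)

def equilibrium(w_fixed, steps=500):
    """Roll the fitted dynamics forward under a fixed treatment fraction."""
    v = 0.0
    for _ in range(steps):
        v = theta @ np.array([1.0, w_fixed, v, w_fixed * v])
    return v

# TTE estimate: equilibrium outcome under all-treated minus all-control.
tte_hat = equilibrium(1.0) - equilibrium(0.0)
```

Here the true equilibrium gap is $1.0 / 0.3 \approx 3.33$, and the fitted rollout recovers it closely because the staggered design provides variation in $w_t$.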
Strengths: The CMP framework is, I think, a very interesting direction for research. In particular, it's a very important direction to weaken the kinds of assumptions about interference necessary to make when working with experiments.
I also think there's something convenient in working with high-level statistics from experiments (for example, many large companies' experimental systems make these particularly easy to retrieve, so reflecting that makes it easier to integrate proposed methods).
Weaknesses: The biggest problem with this paper is that it doesn't actually set out theoretical scope conditions for when it works. I'll be a bit more specific on two points:
1. The combination of the definition of TTE_t as the large-sample limit with HO-CMP-I under Bernoulli randomization. Suppose W_t is determined by Bernoulli randomization (with a fixed probability \pi across all time periods). Then, as N \to \infty, \bar{w}_t \to \pi. This means that the relationship between \bar{w}_t and \hat{\nu}_t will always be exactly zero. Even under the useful linear form discussed on line 228, it appears that this method could not possibly learn anything effectively. I suspect, in fact, that this would also be problematic at finite sample sizes (see the notes on evaluation below). Critically, however, you haven't spelled out any of these scope conditions, so the paper does not make clear what is necessary to achieve success. And I will reiterate that this is using the very asymptotics gestured to in the paper and an experimental design which is explicitly discussed.
2. More specifically, I will return to line 228, which gestures at a generative model under which the proposed method (HO-CMP-I) should work well. But what does this mean? Should we expect the estimator to be unbiased? Should we expect it to be BLUE? Should we expect it to be semi-parametrically efficient? It would be ideal to have theoretical results around this, but barring that, these questions should be addressed directly and explicitly in the experiments.
On the subject of the experiments, I want to see more specifics than the charts you've provided give. Are the estimators unbiased? What are their MSEs? How do these evolve as a function of sample size? What are their convergence rates? In the absence of theoretical results, it is critical to have clear answers to these kinds of questions.
Some minor points that I don't think matter much at all, but will share regardless:
- I think "higher-order" is an overly confusing way to define this approach. When I think higher-order, I think about something like a Taylor series: using more granular information about the setting to define a better estimator. In contrast, your approach is doing something more akin to higher-_aggregation_. I would consider using a term like "higher-level" or something which connotes this kind of aggregation.
- On line 211 you refer to "multi-label" ML models, but I think you mean "multi-task" models or "multivariate" models.
- What you've done here doesn't seem obviously like "machine learning." While I have no problem with linear regression, what you appear to have done here is closer to classical regression modeling than to what is usually meant by machine learning.
- line 264: you don't need to tell the reader what \pi is :)
Technical Quality: 3
Clarity: 3
Questions for Authors: see above
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Comment: 1. We thank Reviewer p2A8 for helping us make this point clearer. The framework of HO-CMP enables us to learn an arbitrary functional form of \nu_{t+1}(w). In HO-CMP-i, we specifically model this as a linear function of \bar{w}_{t+1}. It is important to note that \pi varies during the experiments. For example, in the setting of Figure 2, we set (\pi^{(1)}, \pi^{(2)}, \pi^{(3)}, \pi^{(4)})=(0.1,0.2,0.4,0.5) for four intervals of duration 10 each. This approach allows us to learn the coefficient of \bar{w}_{t+1} in the linear function.
2. As we explained in the global response, the primary focus of this paper is to develop estimators and validate their performance in practical applications rather than to provide theoretical analysis. The reason is that the theoretical treatment of valid questions (such as the ones raised by the reviewer) is highly challenging in this setting. We need to adjust our expectations: in the setting studied here (involving pervasive interference and unknown network structures), it is very difficult to estimate the TTE. For example, take the Non-LinearInMeans outcome specification that we introduce as a generalization of LinearInMeans (studied in the literature, e.g., Leung 2022). To the best of our knowledge, the proposed HO-CMP framework is the first method to efficiently utilize data for estimating the TTE in this setting, and PolyFit and the estimator of [SB23] (which is similar to HO-CMP-i) are also the best benchmarks one could find; no other estimator, to our knowledge, can be applied here.
3. We thank Reviewer p2A8 for the comments. The expected convergence rate is O(1/\sqrt{N}), as discussed in [SB23]. We agree that an empirical demonstration of the rate would be a valuable addition to the paper that we plan to include in the revision.
We agree with the reviewer that the term “higher-order” might be confusing. It is intended to highlight that our function approximation of the dynamics incorporates additional terms. We will address all minor comments to clarify any potential ambiguities.
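The identification argument in point 1 above can be illustrated numerically; the numbers below are hypothetical and chosen only to mirror the rebuttal's staggered design:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 40  # time periods

# Under a single fixed Bernoulli(pi) design, the treated fraction
# w_bar_t concentrates at pi as N grows, so it carries almost no
# variation to regress against (the reviewer's concern).
for N in (100, 10_000, 1_000_000):
    w_bar_fixed = rng.binomial(N, 0.3, size=T) / N
    print(N, w_bar_fixed.std())  # shrinks roughly like 1/sqrt(N)

# A staggered design varies pi across intervals, as in the rebuttal's
# (0.1, 0.2, 0.4, 0.5) example, so w_bar_t retains variation even as
# N grows without bound.
pis = np.repeat([0.1, 0.2, 0.4, 0.5], 10)
w_bar_staggered = rng.binomial(1_000_000, pis) / 1_000_000
print(w_bar_staggered.std())  # stays near the spread of the pi values
```

The contrast is the point of the rebuttal: the coefficient on \bar{w}_{t+1} is learnable precisely because \pi changes across intervals.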
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications. I will retain my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Ladder in Chaos: Improving Policy Learning by Harnessing the Parameter Evolving Path in A Low-dimensional Space | Accept (poster) | Summary: This paper first presents some common phenomena observed for TD3 and RAD agents, and then introduces a novel deep reinforcement learning algorithm, called Policy Path Trimming and Boosting (PPTB), which performs a novel temporal SVD along the policy learning path. This algorithm offers an angle for viewing how the policy evolves in a lower-dimensional space and shows how to utilize this in the reinforcement learning process. The paper is well written and novel.
Strengths: 1. The paper is well-written.
2. The author methodically analyzes the experimental results using specific indicators (Accumulated Parameter Change, Final Parameter Change, Parameter Update Detour Ratio), making the paper very comprehensible.
Weaknesses: 1. Although most of the figures in the paper are very clear, some are unclear and hard to understand, particularly the last two figures in Figure 2(b).
2. Some of the reported improvements in AUC appear to be incorrect. For instance, (189 ± 2 (103.33%)) and (148 ± 15 (81.48%)) seem to contain errors.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the authors provide the learning curves of TD3 and TD3-PPTB? From my understanding, TD3-PPTB should only accelerate training; I don't think TD3-PPTB would perform better than TD3. I believe they should eventually reach the same converged result.
2. The author only provided results for 100k and 500k steps in the RAD results shown in Table 2. What about the results for 1000k or 2000k steps?
3. Is the parameter P_b fixed in each environment? Can you talk about how to determine the P_b in your experiment? If it is not set properly, will it cause excessive oscillation during the training process? Additionally, in the experiment details, a 2000 or 1e5-dimensional neural network is too large for Mujoco training. Most papers indicate that commonly used model dimensions are 64 or 128.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. This paper does not explain from a theoretical perspective why this approach would accelerate convergence.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: The learning curves of TD3 and TD3-PPTB
We provide the missing learning curves in **Figure 9** (for RAD and RAD-PPTB) and **Figure 10** (for TD3 and TD3-PPTB) of the one-page pdf uploaded.
We hypothesize that the **higher convergence performance** in Ant and Hopper can be explained by **PPTB’s effect in preventing plasticity loss** in the minor temporal SVD directions to some degree.
In fact, the policy path trimming (PPT) method proposed in our paper can be viewed as a special way to do network resetting to alleviate the primacy bias as in [Nikishin et al., 2022].
Concretely, our method differs from the vanilla network resetting method in [Nikishin et al., 2022] on two points: (1) we do reset in a transformed subspace (with temporal SVD) rather than the original space; (2) we only reset the parameter update in the minor directions and maintain the learned information in the major directions.
From the perspective of plasticity, our method periodically rolls back the plasticity of the network in the directions that are orthogonal to the ones that represent the effective knowledge learned so far.
> Q2: “The author only provided results for 100k and 500k steps in the RAD results shown in Table 2. What about the results for 1000k or 2000k steps?”
We follow the evaluation setting as that in the original paper of RAD (see Table 1 in the original paper).
> Q3: Some of the reported improvements in AUC appear to be incorrect. For instance, (189 ± 2 (103.33%)) and (148 ± 15 (81.48%)) seem to contain errors.
As described in Line 277-279, for each task, we report the comparative improvement by normalizing with a random-agent baseline as 0 and the DRL base algorithm (i.e., TD3 or RAD) as 1.
This resembles the convention used for Atari tasks, where the random agent is taken as 0 and the human performance is taken as 1.
> Q4: “in the experiment details, a 2000 or 1e5-dimensional neural network is too large for Mujoco training. Most papers indicate that commonly used model dimensions are 64 or 128”
For the dimension of each hidden layer, we follow the convention and use 256 for TD3 and RAD. Thus **1e5 is the scale of the total number of network parameters** (and 2000 is the window size of the historical policies maintained), not the dimension of each hidden layer.
> Q5: Can you talk about how to determine the P_b in your experiment? If is not set properly, will it cause excessive oscillation during the training process?
In our experiments, we found that the choice of $p_b$ can make a difference in the improvement achieved over the baseline algorithm, but it is quite safe to choose from a relatively large range without incurring a clear performance drop. Choosing a large $p_b$ should be safer, as in principle it has less impact on the parameter update according to the temporal SVD.
This is also supported by our empirical results for the SVD parameter reconstruction shown in Table 3 to Table 8 in the appendix.
---
Rebuttal 2:
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response and the additional experiments; they make sense. I’m happy to improve my score.
---
Reply to Comment 2.1.1:
Comment: Thank you for your valuable suggestions and we will make sure to take your suggestions to improve our paper.
We sincerely appreciate the time and effort you devoted to reviewing our work! Please let us know if there are any further questions or discussions. | Summary: This paper investigates the evolving path of policy network parameters in deep reinforcement learning (DRL). The author conducts experiments on multiple tasks in MuJoCo using TD3 and on multiple tasks in DMC using RAD. The findings reveal significant discrepancies in the amount of change among policy parameters and severe detours in policy parameter updates. To address this, the author employs Temporal SVD to decompose the evolving path of policy parameters. Despite the large number of parameters, the learning dynamics of the policy network are found to be concentrated in a few primary directions, forming a low-dimensional space.
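The temporal SVD analysis described in this summary can be sketched as follows. The synthetic parameter path below is my own stand-in for actual policy checkpoints, and all shapes and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 200, 1000  # number of checkpoints x flattened parameter dimension

# Synthetic path: steady progress along three "major" directions with
# distinct time profiles, plus small oscillatory noise everywhere else.
t = np.linspace(0.0, 1.0, T)
profiles = np.column_stack([t, t ** 2, np.sqrt(t)])  # (T, 3) time courses
major = rng.standard_normal((3, D))                  # three fixed directions
Theta = profiles @ major + 0.05 * rng.standard_normal((T, D))

# Temporal SVD of the centered path: the singular-value spectrum shows
# how concentrated the parameter evolution is in a few directions.
U, S, Vt = np.linalg.svd(Theta - Theta.mean(axis=0), full_matrices=False)
explained = np.cumsum(S ** 2) / np.sum(S ** 2)
print(explained[:5])  # the top 3 directions capture nearly all the movement
```

On such a path, the cumulative explained variance saturates after the first few components, matching the paper's observation that the learning dynamics live in a low-dimensional space.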
Based on these insights, the author proposed a new method called Policy Path Trimming and Boosting (PPTB). This method improves the performance of DRL algorithms by canceling updates in secondary parameter directions and boosting progress in primary directions. Experiments conducted on TD3 and RAD agents in MuJoCo and DMC environments demonstrate that the PPTB method significantly outperformed the original methods in terms of score and AUC evaluation metrics, thereby substantially enhancing the performance of the DRL algorithm.
Strengths: The author investigates the dynamics of policy network parameters from a novel perspective of Temporal SVD, and proposes an innovative method PPTB to enhance the performance of the DRL algorithm. The writing style of this paper is clear and easy to understand, with rigorous logic, and it is of great significance to algorithm optimization in the field of DRL.
Weaknesses: The article does not clearly explain why only the TD3 and RAD algorithms were selected. Wouldn't other algorithms exhibit similar phenomena when updating their policies?
The PPTB experiments were only conducted on TD3 and RAD, which does not demonstrate the universality of PPTB across other DRL algorithms.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Wouldn't other DRL algorithms have similar phenomena when updating strategies?
2. Does PPTB work with DRL methods other than TD3 and RAD?
3. Can the performance of PPTB be tested in a deeper network?
4. Can the performance of PPTB algorithm be tested in more experimental environments or different tasks? Are there more evaluation metrics besides score and AUC?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author should enrich the experimental details and experimental content to demonstrate the versatility of the PPTB algorithm, or test the performance of the algorithm in more practical tasks. The article should appropriately discuss the limitations of the proposed algorithm, or provide some prospects for future improvements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Why TD3 and RAD are selected?
The motivation for this work starts from the investigation of the dynamics of policy network parameters. Thus, we choose TD3 and RAD for the following reasons:
- They are popular deep actor-critic methods (note that RAD is SAC-based), where an explicit policy network is trained. This rules out value-based RL methods.
- TD3 and SAC are off-policy methods. They are usually more sample-efficient but less stable than on-policy algorithms like PPO.
- RAD takes visual observations as input. Thus, our choices cover both proprioceptive inputs and image-based inputs.
> Q2: Wouldn't other algorithms have similar phenomena when updating strategies?
As suggested by Reviewer zpVk, in Figure 11 of the one-page pdf, we can observe similar phenomena for Behavior Cloning with Adam or SGD optimizers, but with different concrete patterns.
Besides, we observed similar phenomena for DDPG (omitted due to space limitation).
> Q3: Other suggestions on the experiment
We appreciate the reviewer’s valuable suggestions on the experiment.
We will present the corresponding results if we are able to finish these experiments before the discussion stage ends.
---
Rebuttal 2:
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal 3:
Comment: Thank you for your response. I've read the other reviews and the rebuttal. I’m keeping my initial score. | Summary: Off-policy actor-critic Deep RL has unstable and seemingly oscillatory learning dynamics, which are poorly understood. This paper looks closely at the trajectories taken by policy networks during training. An SVD analysis, performed over sequences of policy parameter snapshots, reveals near-monotonic parameter evolution along directions corresponding to the few dominant singular values, and, indeed, oscillatory behaviour along the minor directions.
The authors propose an intuitive and mathematically sound way to remedy this pathology by only permitting the parameters to move in subspace spanned by a small number of the top directions, while retaining the policy performance. Additionally, by taking larger steps in the first two major directions, they are able to train more performant policies.
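A minimal sketch of such a trim-and-boost update follows. The window size, the number of kept directions `k`, the boost factor `p_b`, and the exact projection step are my guesses at the spirit of the method, not the authors' implementation:

```python
import numpy as np

def trim_and_boost(theta_hist, k=3, p_b=1.5):
    """PPTB-style step (sketch): keep the accumulated parameter update
    only within the top-k temporal-SVD directions of the recent path,
    and scale the components along the two most major directions by p_b.

    theta_hist: (T, D) array of flattened policy-parameter snapshots.
    """
    centered = theta_hist - theta_hist.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    V_k = Vt[:k]                             # top-k temporal directions
    delta = theta_hist[-1] - theta_hist[0]   # accumulated update in window
    coords = V_k @ delta                     # update expressed in the subspace
    coords[:2] *= p_b                        # boost the two major directions
    return theta_hist[0] + V_k.T @ coords    # trimmed + boosted parameters

# Toy usage: a random-walk "training path" of a 500-parameter policy.
rng = np.random.default_rng(0)
path = np.cumsum(0.1 * rng.standard_normal((100, 500)), axis=0)
new_theta = trim_and_boost(path)
```

By construction, the returned parameters differ from the window's start only within the top-k subspace, which is the "only permitting the parameters to move in the subspace" behaviour the review describes.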
Strengths: **Significance**
The paper addresses an important problem. Deep RL researchers have long observed poor training dynamics in off-policy learning. These include divergence, performance collapses, and oscillatory behaviour (i.e. policies forgetting what was learnt, then recovering). The reasons behind these are poorly understood. While the paper does not quite advance our understanding of the problem, it does potentially identify some mechanistic signatures of the problem (oscillatory weight dynamics) and proposes a treatment for this symptom (suppressing movement in oscillation-afflicted subspaces). I think the proposed PPTB fix is unlikely to be a complete solution, mainly because the critic's behaviour is not yet studied, but it can be a stepping stone towards better behaved learning dynamics.
**Clarity**
The paper is very clearly written, and I enjoyed reading it. Policy churn and oscillatory dynamics potentially stem from the combined actor-critic training dynamics, but the authors focus exclusively on the policy here. This is good, as it has kept the study focused and revealed interesting phenomena.
**Originality**
I am unaware of prior work which empirically inspects the policy learning dynamics by actually plotting out the weights' evolution, as the authors have done.
**Quality**
The idea of using temporal SVD upon the policy weights is quite sensible, and a treatment for the problem immediately pops out of the same tool; this fix also appears to be easily implementable in code, and seems to grant substantial performance boosts.
Weaknesses: 1. **Limited analysis.** The analysis is focused on TD3 and RAD (based on SAC here, which itself is quite close to TD3). I think this is insufficient. Seasoned RL practitioners will note that DDPG exhibits markedly more policy oscillations/collapses/recoveries than TD3: the introduction of the Clipped Double Q trick already substantially mitigates those effects. In this empirical study, I worry that by not including DDPG --- the simplest possible baseline --- the authors are not observing the problematic dynamics in their full glory, and perhaps not testing the full potential of PPTB. Right now, there is a risk that your observed phenomena are quirks of TD3-lineage methods.
2. **Related work.** The observation that gradient descent moves the parameters along a few dominant directions, chiefly in a low-dimensional subspace, is not new. There is a body of deep learning literature around this phenomenon, which the authors don't cite right now. Here is one such paper, and there are more:
- "Gradient descent happens in a tiny subspace" G Gur-Ari, DA Roberts, E Dyer
3. **Potentially limited significance.** Following from the above point: the existence of dominant directions is unsurprising. The existence of oscillatory detours in the minor directions *could* be a novel finding regarding actor-critic RL, but we don't know that yet. To strengthen the paper, here's a test: do you also observe the same harmonics in basic supervised learning? (e.g. when training the actor network with an MSE behaviour cloning loss). I think this is a particularly important thing to investigate, because I'm concerned that it could change the story of your paper.
4. **Lack of training curve comparisons.** For a method that expressly tries to curb oscillations, it is important that we see how the actual training curves look, instead of just a table with scores.
5. **Limited evaluations.** 6 is not enough; it's common to report a minimum of 10 independent trials. Also: you have `Humanoid-v4` experiments in the paper, but don't report scores with PPTB on that env.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Line `434`: you say that you choose the boosting coefficient $p_b$ from a set of values. Do you mean that you search from this set for a good hyperparameter value, or do you randomly sample from this set at each training iteration?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations section is well written, but one thing that isn't mentioned is that the method can struggle to scale up to larger network sizes (beyond the smallish networks used in this paper).
Overall, I find this paper exciting, and I'm willing to substantially improve my rating if the authors address all the listed weaknesses (most crucially, number `3`).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Title: Missing Rebuttal
Comment: > Q1: The additional investigation for Behavior Cloning policy
We appreciate the reviewer for pointing out this insightful and inspiring point.
We provide additional results in Figure 11 of the one-page pdf to present additional investigation for **Behavior Cloning** in D4RL halfcheetah-medium-replay **with Adam and SGD optimizers**.
As noted by the reviewer, we found that BC (Adam), BC (SGD), and the RL cases reported in our paper show different patterns in terms of temporal SVD evolution. **The key observation** is that **BC policies turn out not to show the harmonics in the major temporal SVD directions** (the leftmost plots in Figures 11b and 11d). In contrast, we can observe that **the amplitude of the oscillations seems to decrease throughout training**.
We deem that this observation is very interesting which could lead to a deeper understanding of the learning dynamics of BC and RL agents.
For the parameter update analysis, compared with the RL cases reported in our paper, the distribution of parameter update amounts is more Gaussian-like for BC (Adam). This indicates a difference in policy parameter update dynamics between BC (i.e., supervised learning) and RL.
Somewhat surprisingly, the asymmetry of parameter update amount turns out to be more severe in BC (SGD). This is a bit counterintuitive because Adam exhibits less asymmetry than SGD.
We are more than willing to discuss this point with the reviewer in the discussion stage.
> Q2: The analysis for DDPG
We thank the reviewer for pointing this out.
Actually, we also performed the parameter update analysis and the temporal SVD analysis for DDPG. We found **the empirical results are quite similar and the patterns in parameter update and temporal SVD directions are almost the same as those of TD3**. Therefore, the plots are omitted from the one-page pdf.
This indicates that the phenomena revealed in this work are not closely related to overestimation bias (the focus of clipped double Q-learning in TD3, SAC, RAD).
> Q3: Lack of training curves
We provide the missing learning curves in **Figure 9** (for RAD and RAD-PPTB) and **Figure 10** (for TD3 and TD3-PPTB) of the one-page pdf uploaded.
> Q4: “Line 434: you say that you choose the boosting coefficient $p_b$ from a set of values. Do you mean that you search from this set for a good hyperparameter value, or do you randomly sample from this set at each training iteration?”
The complete process is that we first pre-designate a set of candidate values for $p_b$ according to our empirical results for the temporal SVD parameter reconstruction (as shown in Tables 3-8 in the appendix). Then we narrow the set and search within it for a good hyperparameter value.
Moreover, in our experiments we found that different choices of $p_b$ can yield different degrees of improvement over the baseline algorithm, but it is quite safe to choose within a relatively large range without incurring a clear performance drop. Choosing a large $p_b$ should be safer, as in principle it has less impact on the parameter update according to the temporal SVD. This is also supported by our empirical results for the SVD parameter reconstruction.
> Q5: The related work in deep learning literature
We appreciate the reviewer for pointing out the reference paper. We have added it to our draft and will add more in our related work section to strengthen the discussion.
> Q6: Other suggestions on the experiment
We appreciate the reviewer’s valuable suggestions on the experiment.
We will present the results for this if we are able to finish this before the discussion stage ends.
---
Rebuttal 2:
Title: Follow-up questions
Comment: Thank you for the updated results and rebuttal!
> We provide the missing learning curves in Figure 9 (for RAD and RAD-PPTB) and Figure 10 (for TD3 and TD3-PPTB) of the one-page pdf uploaded.
The PPTB versions certainly look better. I encourage you to add them to the appendix in a future version of the paper.
> As noted by the reviewer, we found that BC (Adam), BC (SGD) and the RL cases reported in our paper show different patterns in terms of temporal SVD evolvement. The key observation is that BC policies **do not** turn out to show the harmonics in the major temporal SVD directions (the leftmost plot in Figure 11b and 11d). In contrast, we can observe that the amplitude of the oscillations seems to decrease throughout training.
(^ **emphasis** mine)
Upon looking at the newly provided plots (Fig 11b + 11d in the 1-page PDF), I don't see how one can infer that.
Here's what I see:
1. The BC-training dynamics **do** exhibit oscillations just like the RL-training dynamics in the main paper (e.g. Fig 7).
2. The oscillation amplitudes for the major components do decrease over BC training, just as they **also** decrease during RL training.
Unless I'm very mistaken in my interpretation (happy to be proven wrong!) your experiments reveal that the oscillatory patterns in the top SVD directions are **not specific to RL**, but occur in vanilla supervised deep learning too (irrespective of Adam or plain SGD). And these experiments should go into the paper.
This also suggests that maybe PPTB need not be an RL-specific heuristic but could also help supervised deep learning (but that will require more experiments, so the paper raises more questions than it answers...)
I think that these findings necessitate substantial edits to the paper. E.g.
- Emphasising early on that the empirical observations are a broader deep learning phenomenon, not RL-specific
- Adjusting/removing claims that make it sound like the oscillations stem from RL pathologies (e.g. noisy policy gradients)
- Linking the paper more strongly with related work on low-dim subspace training
For this key reason, I'm maintaining my score at the current level.
I still believe your approach is interesting and original, and I'd encourage you to try and publish a future version with an updated story!
---
A side comment:
If you look closely at Fig 7 column 2 in the paper (the major-component u[k] curves for all 4 MuJoCo envs) you'll find that the oscillatory curves are **nearly identical** for `Hopper-v4`, `Walker2d-v4`, and `Ant-v4` but not `HalfCheetah-v4`. I don't have any intuition for this, but it seems surprising.
As this is an empirical investigation, the authors should think about what that means, how/why that might happen, and discuss that in the paper.
---
Rebuttal Comment 2.1:
Title: Response to the Following-up Questions
Comment: > About the expression “The key observation is that BC policies **do not** turn out to show the harmonics in the major temporal SVD directions”
We agree that BC training dynamics do exhibit **oscillations** (it is apparent).
What we meant to express is:
- In **RL** cases (Figure 7, 8), the amplitude of the oscillation decreases less (relatively slightly or slowly). Thus, the pattern is **more like “harmonics”**.
- In **BC** cases (Figure 11), the amplitude of the oscillation decreases more (relatively faster, especially at the beginning of learning). Thus, the pattern is **more like “wavelets”**. This is why we said “do not turn out to show the harmonics”.
Sorry for using the confusing expression and not making this point clear in our rebuttal.
Moreover, looking at the parameter analysis in Figure 11a, we can observe that **BC (Adam) does not have a significant proportion of parameters with a minor update amount**. This differs from the severe asymmetry observed for DRL (Adam) in the first column of Figure 4 and Figure 5.
The observations also differ between BC (Adam) and BC (SGD), as can be seen by comparing Figure 11a vs. 11c and Figure 11b vs. 11d, especially the first column.
Therefore, these results indicate that **the pattern of the oscillations is different in different learning problems**.
> “your experiments reveal that the oscillatory patterns in the top SVD directions are not specific to RL, but occur in vanilla supervised deep learning too (irrespective of Adam or plain SGD)”
We agree that such oscillations in the view of temporal SVD are likely to exist in more learning problems beyond RL.
What we would like to mention is that **the pattern of the oscillations is different in different learning problems**. Intuitively, the pattern captures the information/features of the learning dynamics of the model, which is in principle determined by factors like the learning paradigm (e.g., RL vs. SL) and the optimizer (e.g., Adam vs. SGD).
We think it may not be easy to find **a unified explanation or theory that interprets all the patterns across different problems**.
In this work, we start from and focus on online RL, leaving a comprehensive explanation/study for the future. We will include our new results in our story to provide more insights, as suggested.
---
Reply to Comment 2.1.1:
Title: More results for "the oscillatory curves are nearly identical for Hopper-v4, Walker2d-v4, and Ant-v4 but not HalfCheetah-v4 (Fig 7 column 2)"
Comment: > If you look closely at Fig 7 column 2 in the paper (the major-component u[k] curves for all 4 MuJoCo envs) you'll find that the oscillatory curves are nearly identical for Hopper-v4, Walker2d-v4, and Ant-v4 but not HalfCheetah-v4. I don't have any intuition for this, but it seems surprising.
We thank the reviewer for pointing this out. We hypothesize that this similarity among the 4 MuJoCo tasks (concretely, **the "phases" of different oscillation curves corresponding to different SVD direction indices**) stems from **the impact of the initial network parameters**, as we use the same initialization method for network parameters across tasks (this is also a convention in practice).
To verify this, we **subtract the initial network parameters from each policy network we collected** during the learning process (note that in each task, for each seed, the policies along the update path share the same initial parameters) and then do the same Temporal SVD analysis for these policy networks. By this means, we get rid of the impact of the initial network parameters on our Temporal SVD analysis.
Now we aim to see **whether the oscillation curves for different tasks look clearly different or still look "nearly identical"**, to verify our hypothesis about the impact of the initial network parameters.
As expected by our hypothesis, we found that, **the "phases" of different oscillation curves are no longer similar, or in other words, the oscillation curves differ clearly among different tasks.**
Meanwhile, we observed the same phenomenon (i.e., Phenomenon 2.2) that we found in our previously reported results: the major Temporal SVD directions oscillate less and the minor directions oscillate more.
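The analysis procedure described above (temporal SVD after subtracting the shared initialization) can be sketched as follows. All dimensions and data here are hypothetical stand-ins for illustration, not our actual checkpoints:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: theta_0 is the shared initialization and
# checkpoints[t] holds the flattened policy parameters at checkpoint t.
T, D = 50, 200                      # number of checkpoints, parameter dimension
theta_0 = rng.normal(size=D)
checkpoints = theta_0 + np.cumsum(rng.normal(scale=0.1, size=(T, D)), axis=0)

# Temporal SVD: rows index checkpoints (time), columns index parameters.
# Subtracting theta_0 removes the influence of the shared initialization.
M = checkpoints - theta_0
U, S, Vt = np.linalg.svd(M, full_matrices=False)

# Column k of U is the temporal curve u[k]: how strongly the k-th SVD
# direction is expressed at each checkpoint over training.
u_major = U[:, 0]     # major direction: large singular value, smoother trend
u_minor = U[:, -1]    # minor direction: small singular value, oscillates more
print(U.shape, S.shape, Vt.shape)   # (50, 50) (50,) (50, 200)
```

Comparing the "phases" of the `u_major`-style curves across tasks, with and without the `theta_0` subtraction, is exactly the check described in our response.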
As we are not allowed to upload these plots in this stage (or even with an anonymous link), we will add these additional results and discussion in our revision.
We believe that these results and discussions address the concerns raised in the review, but please let us know if there are any further issues to address.
---
Rebuttal 3:
Title: We appreciate the reviewer's constructive and inspiring comments. And some more discussion
Comment: We greatly appreciate the time and effort devoted by the reviewer to reviewing our work and participating in the discussion.
The comments and discussions provided by the reviewer are really valuable and inspiring, and we believe these comments are leading our work to a higher quality and level.
We will carefully take the suggestions and improve our organization and presentation to include the additional results done in the rebuttal and discussion stages, i.e., the parameter and **temporal SVD analysis for DDPG, BC (Adam and SGD), as well as the additional evaluation of PPTB for DoubleDQN in MinAtar** (as suggested by Reviewer dZpR).
Concretely, we plan to start by presenting the learning process of policy networks from a relatively general DL angle to cover aspects like optimizers (e.g., Adam, SGD) and learning paradigms (i.e., RL, SL). Then we plan to discuss these aspects separately by presenting the empirical observations regarding the network parameter analysis and the temporal SVD analysis, with a focus on online RL policy learning as presented in our draft. Afterward, we discuss the differences in the empirical observations in different settings, by establishing the connection between the distinct features of different learning settings and the difference in the patterns of the distribution of network parameter update amount or the oscillations of temporal SVD directions.
Here are **some results and conclusions we’ve already observed**:
1. by comparing the results of TD3 and DDPG (as mentioned in Q2 of our rebuttal), we can find that the overestimation bias (i.e., the major difference between TD3 and DDPG) does not seem to be a factor that causes the oscillations.
2. by comparing TD3 and BC (Adam), we can find that RL and SL lead to different distributions of network parameter update amount (Figure 4a vs. Figure 11a) and the “harmonic vs. wavelet” oscillations (Figure 7a vs. Figure 11b).
3. by comparing BC (Adam) and BC (SGD), as shown in Figure 11a,b vs. Figure 11c,d, we can find that the momentum mechanism used in Adam leads to a relatively more evenly distributed network parameter update amount, while SGD has a large number of minor updates (also reflected by the flat $u[1]$ curve in Figure 11d, first column) and a long tail in the distribution.
These observations are the empirical support for our previous response that “the pattern of the oscillations is different in different learning problems”. And given the second observation outlined above, we might not be able to fully agree with “However, the RL and SL training dynamics, empirically, did not show distinguishably different oscillatory patterns” in the reviewer's comment. We are running more BC experiments to investigate this point further.
Strengths: This paper uncovers some nice insights into the training process of deep RL agents.
As far as I know, I have not seen the SVD used to study parameter evolution over time, and I find it is an interesting application of that tool. The proposed algorithm is conceptually simple and easy to implement, making it easy to add to a variety of existing algorithms.
In terms of impact, the identified phenomena may extend past RL and could be true of deep neural network training more generally, potentially giving widespread impact.
The paper is well-organized, with the different sections flowing nicely into each other. Generally, the paper was easy to follow, and the experiments were chosen appropriately to make the intended arguments.
Weaknesses: My main concern is about the evaluation of the algorithms.
For example, the max return is reported in the evaluation of the algorithm, and the standard deviation across runs is used, whereas the standard error or bootstrapped confidence intervals would be more appropriate. (see Questions)
Also, the following design choice is confusing to me:
- Fig. 1 caption. "Only upper 80% values according to $\nabla^{apc}_j$ are taken to plot...for meaningful analysis"
Could you elaborate why this decision was made? Does it have to do with many of the values being close to zero?
The improvement in performance is generally fairly modest. I think this is fine given that the main contribution to me is identifying behaviours in the parameter evolution. I think expanding a bit more on the analysis could be interesting.
In terms of clarity, some details could be expanded upon more in the main text. See Questions section.
Technical Quality: 2
Clarity: 4
Questions for Authors: _Clarification questions_
- Line 157: What is $\alpha_k$? Is it Singular Value Information Amount?
- Fig 2 b) 4th and 5th figures from left. Interesting findings. So basically, the detour ratio is smaller for the major directions. Again, I wonder if that has more to do with noise or curvature. Perhaps there is less noise in these directions.
- In the 3rd fig from the left, it's hard to see much since the black curve masks everything else. Consider using different colors or some transparency. It would be nice to see how the paths get increasingly noisy.
- Policy Path Trimming: To clarify the algorithm, does it project the current parameters into the space identified by the top singular vectors?
The description of the algorithm could be improved in Sec. 4.1. where it is introduced. While the intuition is described, the exact mechanism that is implemented is not explained.
- Similarly, in section 4.2, the Policy Path Boosting could be described a bit more clearly. Eq. 2 in particular is a bit confusing since
$\hat{u}_{n,*}$ is updated but then only
$\hat{u}_{n,i}$ $(i=1,2)$ are actually used.
The phrase in line 241 "PPB modifies $\theta_n$ by increasing $u_{n,1}, u_{n,2}$ along the temporal direction..." sounds overly complicated. Perhaps rephrasing it to something like "PPB moves the parameter further in the direction of previous updates along the first two main directions $u_{n,1}$ and $u_{n,2}$" could be simpler.
_Suggestions and broader questions_
- By constraining the parameter evolution path to focus on the previous main directions, would the effect of "primacy bias" or related phenomena be even stronger? Could we be losing out on performance due to prematurely committing to certain update directions?
- Line 270. The evaluation metric "SCORE" should no longer be used since the maximum over runs introduces overestimation bias and leads to less reliable estimates. See [1] for better evaluation practices and [2] for arguments against using the max.
- I wonder if the detours are mainly due to noise or curvature of the objective. One way to test this would be to increase or decrease the minibatch size, which can control the variance of the updates. Then, by inspecting the effect on the detour ratio or cumulative parameter movement, we could guess the relative impact of noise and curvature.
- Fig.1 is a CDF plot. I think a histogram or a box plot might be easier to interpret than CDF plots since you need to look at differences in a CDF plot to identify where most of the probability mass is.
- An ablation study for the two components of the algorithm would be a valuable addition. It is not clear if both pieces are necessary right now or how important they are.
- I would also be curious to know what would happen if we took the estimated space
- Here is a paper that could be interesting to read [3], where the authors show that, even if you constrain neural network parameters to a random subspace, as long as the dimension of that subspace is not too small, you can recover the same performance as the original network.
The idea of Policy Path Trimming could be interpreted as a more intelligent approach which estimates the constrained subspace instead of choosing a random one.
- Another line of research (e.g. see [4]) has observed that the Hessian contains only a few large eigenvalues in neural network training, which may be related to the ideas discussed in the paper since there would be some interplay between the curvature of the loss surface and directions of updates over time.
- In the matrix of parameters over time, consider using $t \in \{1, \dots, T\}$ to index the rows instead of $n$ so it's a little easier for the reader to remember which dimension corresponds to what. Alternatively, if $t$ is reserved for environment timesteps, $\tau$ could be used as a substitute.
[1] "Deep Reinforcement Learning at the Edge of the Statistical Precipice" Agarwal et al.
[2] "Deep Reinforcement Learning that Matters" Henderson et al.
[3] "Measuring the Intrinsic Dimension of Objective Landscapes" Li et al.
[4] "An Investigation into Neural Net Optimization via Hessian Eigenvalue Density" Ghorbani et al.
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: These are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: the evaluation of the algorithms. For example, reporting the max return in the evaluation of the algorithm or using the standard deviation across runs, whereas the standard error or bootstrapped confidence intervals would be more appropriate
Technically, we are not reporting the “max return” in our evaluation. We report the **“maximum of the mean return”**. To be concrete, this is obtained by first (1) computing the curve of mean return evaluation across multiple runs, and then (2) taking the maximum of the mean evaluation curve.
We **follow the evaluation scheme presented in TD3 paper** (see “Max Average Return over 10 trials” in the caption of Table 1 in TD3 paper).
The rationale for this evaluation scheme is that, for online learning, it is possible to store checkpoints throughout training. In the opposite case, offline RL, the convention is to report the mean final score.
In Line 279, we wrote, “We report the means and standard deviation errors across six independent trials”. We meant to say “standard errors” rather than “standard deviation errors”. This is a mistake in writing. We have amended it in our revision.
We appreciate the reviewer for the suggestion of using other more reliable evaluation metrics. We will take the suggestion in our revision.
> Q2: “I think expanding a bit more on the analysis could be interesting”
As suggested by Reviewer zpVk, we provide additional results in Figure 11 of the one-page pdf to present additional investigation for **Behavior Cloning** in D4RL halfcheetah-medium-replay **with Adam and SGD optimizers**.
Compared with the RL cases reported in our paper, the parameter update amount is more Gaussian-like for BC (Adam). This indicates a difference in policy parameter update dynamics between BC (i.e., supervised learning) and RL.
Somewhat surprisingly, the asymmetry of parameter update amount turns out to be more severe in BC (SGD). This is a bit counterintuitive because Adam exhibits less asymmetry than SGD.
Moreover, we found that BC (Adam), BC (SGD) and the RL cases reported in our paper show different patterns in terms of temporal SVD evolvement.
We believe that these results are interesting to potential audiences of this paper and worth a more in-depth discussion.
> Q3: “By constraining the parameter evolution path to focus on the previous main directions, would the effect of primacy bias or related phenomena be even stronger? Could we be losing out on performance due to prematurely committing to certain update directions?”
In fact, the policy path trimming (PPT) method proposed in our paper can be viewed as a special way to do network resetting to alleviate the primacy bias as in [Nikishin et al., 2022].
Concretely, our method differs from the vanilla network resetting method in [Nikishin et al., 2022] on two points: (1) we do reset in a transformed subspace (with temporal SVD) rather than the original space; (2) we only reset the parameter update in the minor directions and maintain the learned information in the major directions.
From the perspective of plasticity, **our method periodically rolls back the plasticity of the network in the directions that are orthogonal to the ones that represent the effective knowledge learned so far**.
We appreciate the reviewer’s insightful comments and we believe that more study can be done in the future by taking into consideration both policy update subspace and the plasticity loss problem.
> Q4: "Only upper 80% values according to $\Delta_{j}^{\text{apc}}$ are taken to plot...for meaningful analysis" Could you elaborate why this decision was made? Does it have to do with many of the values being close to zero?
This is because many parameters have a very minor value (i.e., very close to zero) of accumulated parameter change (as revealed by the parameter update asymmetry phenomenon).
We have clarified this in our draft to eliminate the confusion.
> Q5: The clarification questions
- [Line 157: What is $\alpha_k$? Is it Singular Value Information Amount?] Yes, $\alpha_k$ is the Singular Value Information Amount for a dimensionality $k$, which is **defined in Line 155**.
- [Fig 2 b) 4th and 5th figures from left. Interesting findings. So basically, the detour ratio is smaller for the major directions? Again, I wonder if that has more to do with noise or curvature. Perhaps there is less noise in these directions] Yes, the detour ratio is smaller for the major directions. We consider that it is closely related to the curvature of the landscape of the policy objective function.
- [In the 3rd fig from the left, it's hard to see much since the black curve masks everything else. Consider using different colors or some transparency. It would be nice to see how the paths get increasingly noisy.] We appreciate the reviewer’s suggestion. We will replace the plot with frequency information obtained via the Fourier transform for a better presentation.
- [More description for Policy Path Trimming and Policy Path Boosting] The conversion from the original parameter space to the subspace is done by performing temporal SVD. The corresponding left singular vectors are taken as the new coordinates in the subspace. This is described between Line 145-152. We have taken the writing suggestions provided by the reviewer and added more detailed descriptions in Section 4.1 and 4.2 to eliminate the confusion.
> Q6: Other suggestions
We sincerely appreciate the reviewer for providing insightful comments along with very useful reference papers.
Due to the time limit of the rebuttal stage, we may not be able to respond to each point. However, we are more than willing to discuss more in the discussion stage.
---
Rebuttal 2:
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal Comment 2.1:
Comment: Thank you for the clarifications.
A quick follow-up question:
How is the PPTB similar to resetting among "minor" update directions?
If I understand correctly, it only restricts updates in those directions but does not involve any resetting of weights.
---
Reply to Comment 2.1.1:
Title: Response to the similarity between PPTB and resetting of weights
Comment: We appreciate the reviewer for further feedback!
> How is the PPTB similar to resetting among "minor" update directions? If I understand correctly, it only restricts updates in those directions but does not involve any resetting of weights.
PPTB's resetting effect in the minor update directions is achieved by (1) obtaining the transformed parameter space by performing temporal SVD on the historical policies collected in a sliding window, and then (2) (re-)setting (or dropping) the left singular vector values (i.e., $u$) to zero.
More concretely, this differs from the resetting method in [Nikishin et al., 2022] at two points:
1. **[Resetting to random initialization vs. resetting to zero in the transformed parameter space]** The resetting method in [Nikishin et al., 2022] resets the network parameters to a set of randomly (re-)initialized parameters. In contrast, PPTB resets the left singular vector value (i.e., $u$) to zero for the minor temporal SVD directions, i.e., resetting the parameters to zero in the transformed parameter space. One thing to note is that resetting the left singular vector value (i.e., $u$) to zero does not mean that the network parameters (in the original parameter space) are necessarily zero, because the bases of the two spaces are different. A more in-depth analysis of the correlation between them is worth further study in the future.
2. **[Global resetting vs. local resetting]** The resetting method in [Nikishin et al., 2022] is global, as the network parameters are reset to a set of randomly (re-)initialized parameters. In contrast, we reset in the temporal SVD parameter space based on a sliding window of recent historical policies (thus local). In this sense, this is similar to Shrink-and-Perturb [Ash & Adams, 2020], which can be viewed as a soft version of the resetting method, in that it shrinks the network parameters and adds random parameters with a coefficient.
Indeed, we agree with the reviewer that this effect can be understood as a restriction in these directions. We appreciate the reviewer's inspiring comments. We will add these discussions in our revision for a more comprehensive understanding.
Please let us know if there are any remaining questions or concerns that we can address to improve your assessment. We are willing to discuss more with the reviewer to improve our work further. | Rebuttal 1:
Rebuttal: We appreciate all the reviewers’ careful review and valuable comments. Please refer to the individual rebuttals for our responses.
In the one-page pdf uploaded, we provide additional results:
- **[(Suggested by Reviewer zpVk) Additional empirical investigation on Behavior Cloning in D4RL halfcheetah-medium-replay with Adam and SGD optimizers]** Figure 11 presents the results for an extended investigation on Supervised Learning (BC) and RL, momentum-based optimizer (Adam) and vanilla optimizer (SGD).
- **[(To Reviewer B4U6) The missing learning curves]** Please refer to Figure 9 and Figure 10.
With these additional discussions and experimental results, we would like to emphasize that **our empirical study** and **the method for improving DRL agents in the policy subspace constructed with temporal SVD** have not been studied before to the best of our knowledge.
Finally, we sincerely hope that our response can address the questions and concerns raised by the reviewers. We also hope that the reviewers can re-evaluate the value of our work based on the responses and the additional results provided in the one-page pdf.
We are also more than willing to address any further questions or concerns during the discussion stage.
Pdf: /pdf/fccebf395f276b9c07aaca0bf9fa4a79744ea3db.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors examine the trajectories of policy learning in continuous control reinforcement learning tasks.
They begin by measuring how directly parameters go to their destination and observe large detours and differing update behaviour for different layers. They then examine the singular value decomposition of the different training checkpoints and observe some strong common update trends.
They propose Policy Path Trimming and Boosting (PPTB), where policy boosting is boosting the gradients towards the strongest singular values, and trimming trims the updates from the smaller singular values.
Strengths: * The paper describes their method and analysis well. I generally found it easy to read and understand what points they were trying to convey.
* The proposed method does seem to improve performance on the tasks presented.
Weaknesses: * The environments and methods investigated, given that the authors' results are purely empirical, are not diverse enough to draw any real conclusions. The authors focus on continuous control environments without considering discrete environments such as Atari, more diverse network architectures such as recurrent networks or transformers, or different methods such as PPO or DQN. These results hold only for MLPs or CNNs trained on continuous control tasks, which just isn't a convincing enough setting to warrant acceptance.
* The proposed method does not seem particularly practical. Computing the SVD is both time intensive and requires storing a wide range of previous parameters. It therefore requires a lot more compute, which is why such methods are typically not used. Compare this to approximations such as momentum-based optimisation [1], which also aim at more uniform convergence, in that case among the eigenvalues of the data matrix. Although their method improves performance, they have not convincingly demonstrated that the compute couldn't be better used elsewhere, for example by training a bigger network or by sweeping hyperparameters more effectively.
[1] Goh, "Why Momentum Really Works", Distill, 2017. http://doi.org/10.23915/distill.00006
Technical Quality: 3
Clarity: 2
Questions for Authors: * Can you comment more on the relationship between your work and other, less compute intensive methods, that aim to allow more uniform convergence among the different data eigenvalues such as momentum? Is there an explicit connection here? Have you thought about that?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: * The authors, to their credit, provide an extensive discussion of the limitations of their work in Appendix A. I agree largely with the points in that section and enjoyed their contextualisation of their work there.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: “Computing SVD is both time intensive and requires storing a wide range of previous parameters. It therefore requires a lot more compute, which is why such methods are typically not used”
We apologize for the confusion regarding the memory consumption and computational overhead of our method.
We would like to further clarify this point in the following:
**[Computation cost]** We would like to clarify that (1) **each SVD operation in our experiments takes roughly 1 second** using **torch.linalg.svd**, and (2) the SVD operation is **not performed at every time step but rather at sparse intervals (e.g., $t_p \ge 1000$)**. The practical computation cost in wall-clock time is **less than 5%** of the total training time, which we believe is worthwhile given the benefits SVD brings.
Moreover, the temporal SVD operations remain scalable as matrix size increases, thanks to the use of efficient linear algebra libraries. This scalability has been validated by numerous successful cases, such as the widely adopted large model fine-tuning technique, LoRA (Low-Rank Adaptation) [1]. The effective application of LoRA clearly demonstrates that SVD is not a significant issue in terms of memory overhead and computational cost.
**[Memory cost]** As described in Appendix B, PPBT does not require storing all historical policies but instead stores policies at sparse intervals. In our implementation, the size of the stored policy parameter matrix is $[2000, \approx 1e6]$, making the memory overhead entirely acceptable.
We have clarified this point in our draft.
> Q2: Can you comment more on the relationship between your work and other, less compute intensive methods, that aim to allow more uniform convergence among the different data eigenvalues such as momentum? Is there an explicit connection here? Have you thought about that?
In Q1, we responded to the misunderstanding regarding the “compute-intensive” point.
We are more than willing to discuss the relationship between our work and any concrete related work provided by the reviewer in the discussion stage.
---
Reference:
[1] LoRA: Low-Rank Adaptation of Large Language Models. 2021
---
Rebuttal 2:
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal Comment 2.1:
Title: Does our clarification address the reviewer's concern on the computational cost?
Comment: Our response clarified the computational cost of our method clearly. In practice, our method **adds minor additional computational costs** to the baseline algorithms we considered in our experiments. In addition, we provided our understanding and thoughts on the scaling of our work.
We believe that our response addressed the concern about the computational cost raised in the review.
We would greatly appreciate it if the reviewer could confirm that our response has addressed your concern on this point. Please let us know if there are any further issues. | Summary: The authors study how parameters evolve in Deep RL. They perform SVD on updates and find that parameters advance along a small number of directions. They then propose a method to trim the policy learning path by focusing the updates on these major directions. They show that their methods improve performance in MuJoCo and DMC.
Strengths: The work is a well-written exploration of an interesting and novel perspective on RL parameter updates. The authors are extremely thorough and clear with their investigation and show strong results on common benchmarks. Furthermore, their method seems very simple to implement, which is valuable to the community.
Weaknesses: Concerns with the paper:
1. The paper only investigates extremely dense reward settings. Intuitively, the conclusions of this paper should not apply to sparse(r) reward settings, which are arguably much more interesting in RL. (See Q1 below).
2. The results of the paper do not seem like they would be at all specific to RL. I'm not convinced that the observed phenomenon is not just a simple byproduct of using an optimizer with momentum. It would be good to have results that show that this phenomenon does not occur or help in supervised learning tasks (where I would imagine there is significantly more literature on this topic), and/or that this phenomenon still occurs when studying agents trained with plain SGD.
3. The significance of the results are unclear (see below)
Possible improvements:
1. It could be neat to replicate the empirical investigations when using your new method PPTB. Does PPTB actually address the issues presented and dampen parameter updates?
2. The results are not easy to read. It is hard to tell which results are statistically significant (I would recommend using standard error!) and also plotting standard RL training curves with the appropriate error regions. As-is, it's very hard to tell whether this method actually helps or not. In particular, it seems as though the error regions often overlap in Table 2, for example.
3. The writing is often vague. In the abstract, the authors write: "we study how the policy networks of typical DRL agents evolve during the learning process by empirically investigating several kinds of temporal change for each policy parameter". This vague sentence conveys little information. The authors also use the word "asymmetry" in the abstract without explaining what they mean (asymmetric with respect to what?).
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Do you think this would work equally well in sparse(r) reward settings? My intuition is that it would not, since the early parameter updates likely do not contain significant information about the reward. The continuous control tasks evaluated are *particularly dense*. Also, doesn't pruning the noisier directions harm exploration?
2. Related to the above, why do you believe this method does not exacerbate issues of primacy bias mentioned in the paper? The related work section's first paragraph does not really compare and contrast to the prior works, just mentions them.
3. Do you think these results apply beyond just RL? I see no reason why this phenomenon is RL-specific.
4. Related to Q3: Doesn't the fact that these methods use momentum (e.g. Adam) make this phenomenon obviously true? Does this phenomenon persist when using plain SGD? What about when you observe the *gradient updates* as opposed to the *parameter updates*?
5. Can you include the plots mentioned above? It would really help with my understanding of the paper.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The authors address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: “Do you think these results apply beyond just RL? I see no reason why this phenomenon is RL-specific. Doesn't the fact that these methods use momentum (e.g. Adam) make this phenomenon obviously true? Does this phenomenon persist when using plain SGD? What about when you observe the gradient updates as opposed to the parameter updates?”
We appreciate the reviewer for pointing out this. We agree that it will be interesting and valuable to extend our experimental investigation to more general deep learning settings.
To this end, we provide additional results in Figure 11 of the one-page pdf to present additional investigation for **Behavior Cloning** in D4RL halfcheetah-medium-replay **with Adam and SGD optimizers**.
Compared with the RL cases reported in our paper, the distribution of parameter update amounts is closer to a Gaussian for BC (Adam). This indicates a difference in policy parameter update dynamics between BC (i.e., supervised learning) and RL.
Somewhat surprisingly, the asymmetry of the parameter update amount turns out to be more severe in BC (SGD). This is a bit counterintuitive under the hypothesis that momentum drives the asymmetry, since BC (Adam) exhibits less asymmetry than BC (SGD).
Moreover, we found that BC (Adam), BC (SGD), and the RL cases reported in our paper show different patterns in terms of temporal SVD evolution.
In this paper, we start from and focus on online RL, and we defer the study across different learning paradigms to future work.
> Q2: “why do you believe this method does not exacerbate issues of primacy bias mentioned in the paper?”
In fact, the policy path trimming (PPT) method proposed in our paper can be viewed as a special way to do network resetting to alleviate the primacy bias as in [Nikishin et al., 2022].
Concretely, our method differs from the vanilla network resetting method in [Nikishin et al., 2022] on two points: (1) we do reset in a transformed subspace (with temporal SVD) rather than the original space; (2) we only reset the parameter update in the minor directions and maintain the learned information in the major directions.
From the perspective of plasticity, **our method periodically rolls back the plasticity of the network in the directions that are orthogonal to the ones that represent the effective knowledge learned so far**.
We appreciate the reviewer’s insightful comments and we believe that more study can be done in the future by taking into consideration both policy update subspace and the plasticity loss problem.
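As an illustration only (the exact PPT procedure is specified in the paper), the trimming-as-resetting view described above could be sketched as follows, assuming the stored policy snapshots are stacked as rows and the top-$k$ right-singular directions are retained; the function name and shapes are ours:

```python
import numpy as np

def trim_policy_path(w_hist, k):
    """Keep only the top-k temporal-SVD directions of the update path.

    w_hist: (T, d) array of policy parameter snapshots over training.
    Returns the trimmed final parameters.
    """
    w0, wt = w_hist[0], w_hist[-1]
    updates = w_hist - w0                          # (T, d) update path
    _, _, vt = np.linalg.svd(updates, full_matrices=False)
    v_major = vt[:k]                               # top-k right-singular directions
    # Reset the components in the minor directions; keep the major ones.
    return w0 + v_major.T @ (v_major @ (wt - w0))
```

In this view, the minor-direction components are rolled back to their initial values, which matches the "reset in a transformed subspace" description in point (1) above.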
> Q3: “Do you think this would work equally well in sparse(r) reward settings? My intuition is that it would not, since the early parameter updates likely do not contain significant information about the reward. The continuous control tasks evaluated are particularly dense. Also, doesn't pruning the noisier directions harm exploration?”
We appreciate the reviewer for pointing out this inspiring point. First, we consider that the asymmetry in terms of parameter update amount and the concentration of temporal SVD information should be more severe in sparse-reward settings, as the self-distillation mechanism can dominate when the reward signal contains little information [1].
Second, our method does not aim to address the exploration problem and thus it is less likely to improve the learning performance especially when the baseline algorithm fails with very sparse rewards.
However, we do not think that our method necessarily harms exploration, because the trimming happens at a very sparse interval, and moreover the effect of the minor-direction parameter updates on exploration is not yet clear. One possible understanding is that the periodic trimming rolls back the plasticity (as discussed in Q2) and could encourage learning new behaviors with the parameters in these minor directions.
We are also running additional experiments in several sparse-reward DMC tasks. We will present the results for this if we are able to finish this before the discussion stage ends.
---
Reference:
[1] DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization. 2021
---
Rebuttal 2:
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal 3:
Comment: Thanks for the response! I had a quick question on your thoughts on my comments on statistical significance (the second point on improvements in the weaknesses). I don't see it in the response, but may have missed it. In general, the difference in performance seems extremely marginal, though I do understand that most RL benchmarks have long since saturated.
I do believe most of my concerns have been addressed, though would hope to get a quick answer to the above.
---
Rebuttal 4:
Title: Response to the statistically significance and the suggestion of using standard error
Comment: We appreciate the reviewer's further feedback!
> "It is hard to tell which results are statistically significant (I would recommend using standard error!) and also plotting standard RL training curves with the appropriate error regions"
Sorry for missing the response to this point in our rebuttal. We actually responded to the "standard error" point in Q1 of our rebuttal to Reviewer iaGj, but missed it here.
In Line 279, we wrote, “We report the means and standard deviation errors across six independent trials”. **We meant to say “standard errors” rather than “standard deviation errors”**. This is a mistake in writing. We have amended it in our revision.
In our one-page pdf material, we provided the learning curves with means and standard errors (Figure 9 and Figure 10), which should help to show a more complete comparison than the scores we reported in the tables in our draft.
Please let us know if there are any remaining questions or concerns that we can address to improve your assessment. We are willing to discuss more with the reviewer to improve our work further.
---
Rebuttal Comment 4.1:
Comment: If the error regions refer to standard error, then aren't a lot of your results in Table 2 insignificant? The regions sometimes overlap heavily. In my head, I was converting the standard deviation to standard error (which means I divide the std by the square root of the number of seeds).
That being said, I think my key concerns have been addressed, and this point is a rather minor one (most RL benchmarks have long since saturated). It's honestly hard to tell if this is a significant improvement (looking also at Figures 9 and 10 in the author's rebuttal), but this is not the fault of the authors as much as it is the field. Because of this, I am raising my score.
---
Reply to Comment 4.1.1:
Title: We sincerely appreciate the reviewer's time and effort
Comment: We sincerely appreciate the time and effort you devoted to reviewing our work.
In our draft, we followed the evaluation scheme of previous works, but we agree with the reviewer that more random seeds would further improve the quality of our experimental evaluation. We plan to increase the number of seeds from 6 to 12 in these tasks.
Thank you for your valuable suggestions and we will make sure to take your suggestions to improve our paper. | null | null | null | null |
Revisiting Differentially Private ReLU Regression | Accept (poster) | Summary: This paper studies the problem of learning a planted (student-teacher) ReLU regression model, privately. The paper proposes two algorithms (DP-GLMtron and DP-TAGLMtron) to do this. For both algorithms, privacy-utility tradeoffs are computed. The results do not explicitly contain the ambient dimension $d$; instead, they contain other terms such as the "effective dimension", which allows the bounds to remain non-vacuous in the regime where there are more parameters (in this case, equal to the number of input dimensions $d$) than samples $N$.
Strengths: This paper tackles a difficult and theoretically interesting problem, i.e. (in its generality) private, non-convex optimization. This paper focuses on the case of a planted ReLU regression model. The initial assumptions (see basically page 4) are, in my opinion, reasonable for this setting.
Weaknesses: The paper has, in my opinion, 4 major weaknesses:
- Its fit with the related literature is not always clear. The main improvement with respect to previous results is highlighted in Table 1, and the baseline is the recent and unpublished work [1]. In line 98, results on convex optimization are presented [2, 3]. These results are not presented with the dependence on the $l_2$ norm of the solution, and it is not mentioned that both results assume the input samples to have bounded $l_2$ norm, which is not the case in this work. This makes the comparison with the convex baseline unclear. Furthermore, a comparison with [4] is not provided, which would offer a more solid baseline than [1].
- The analysis is not well motivated. The authors' contribution is based on the two proposed algorithms, namely DP-GLMtron and DP-TAGLMtron. Considering the arguable practicality of such algorithms in realistic deep learning problems, why didn't the authors focus, for example, on providing bounds for DP-SGD instead? This makes the results very specific to the toy setting of ReLU regression, for which the two algorithms are specifically designed.
- While it is true that the provided bounds do not explicitly contain $d$, the true dependence on the ambient dimension through other terms is unclear. Namely, in line 222 the authors set $\|x\|_2 \leq 1$, argued to be the improvement with respect to [1]. This allows removing the dependence on $d$ from the trace of the covariance $H$. The authors also assume $\|w_0 - w^*\|_2$ to be bounded in Corollary 4.3. Furthermore, the bounds also depend on $\Gamma$, with no discussion of when this term is reasonably small.
- Empirical validation of the claims is relatively weak. In particular, in the main body, empirical results are presented on a synthetic dataset with quickly decaying covariance eigenvalues. The experiments on MNIST are pushed to Appendix B (there might be a typo in the third line of the caption of Figure 2, where it's mentioned that the data is Bernoulli), where the results are, in my opinion, puzzling. The baseline of DP-SGD is presented (which has to be improved upon by the two proposed algorithms to motivate the analysis), and the excess risk for $\varepsilon = 0.5$ increases as the number of samples increases (!). Furthermore, this excess risk is approximately $\sim 24$, which seems larger than simply random guessing. These dubious results make me suspect a flawed comparison with standard and well-established algorithms such as DP-SGD, pointing towards the second weakness mentioned in this list.
[1] - https://arxiv.org/pdf/2310.08425
[2] - https://proceedings.mlr.press/v32/jain14.pdf
[3] - https://arxiv.org/pdf/2006.06783
[4] - https://proceedings.mlr.press/v97/wang19c/wang19c.pdf
Technical Quality: 2
Clarity: 2
Questions for Authors: Can the authors provide extensive comparison with [4]?
Why didn't the authors focus on DP-SGD? Why is it useful to introduce these two algorithms?
Can the authors comment on their implicit assumption on $\Gamma$?
Can the authors elaborate on the experimental settings used for DP-SGD on MNIST data?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer M8TJ for the thorough review and the insightful comments.
**Response to the Weakness 1 and Question 1**
We thank reviewer M8TJ for the constructive feedback and we will include these comments in the updated version, thanks! We acknowledge that our main improvement is highlighted in Table 1, with [1] as the baseline. While [4] also studies DP-GLM with non-convex loss, it is not directly comparable to our work. Specifically, Theorem 5 of [4] assumes $\lVert x_i \rVert_2 \leq 1$, while we consider $x$ has a sub-Gaussian tail, meaning $\lVert x_i \rVert_2 \leq O(\sqrt{d})$ with high probability. Moreover, [4] provides a bound of $\tilde{O}(\frac{\sqrt[4]{d}}{\sqrt{n\epsilon}})$ dependent on $\sqrt[4]{d}$, which we aim to mitigate in the overparameterized setting. For Theorem 6 in [4], it still needs to assume $\lVert x \rVert_\infty \leq 1$ while we consider more general distributions that could be unbounded. Moreover, it needs a very strong assumption that $\lVert w^* \rVert_1 \leq 1$ as it uses DP-Frank Wolfe.
Regarding the results on convex optimization in lines 98 and references [2, 3], we will revise our paper to clarify differences in assumptions, particularly regarding the bounded norm of input samples, ensuring clear distinctions between our work and the convex baseline.
**Response to the Weakness 2 and Question 2**
We appreciate the reviewer's attention to our motivation. Before analyzing DP-GLMtron, we noticed that DP-SGD sometimes fails to converge and reaches suboptimal solutions (as seen in our experiments). Therefore, we visualized the training trajectories of DP-SGD and DP-GLMtron on a 2D noiseless ReLU regression task with symmetric Bernoulli data (the figures are in the PDF file; please check it). The visualization demonstrates that DP-SGD struggles to converge under large noise conditions and is more likely to settle at a saddle point rather than reach the optimal solution.
As we mentioned in our paper section 4 (equation 3), the SGD and GLMtron follow the different update rules:
$$SGD: \mathbf{w}\_t=\mathbf{w}\_{t-1}-\eta \cdot (\operatorname{ReLU} (\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1})-y\_t) \mathbf{x}\_t \cdot \mathbb{1}[\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1}>0] $$
$$GLMtron: \mathbf{w}\_t=\mathbf{w}\_{t-1}-\eta \cdot (\operatorname{ReLU} (\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1} )-y\_t ) \mathbf{x}\_t $$
When $\\mathbf{x}\_t^{\top} \\mathbf{w}\_{t-1} \leq 0$, the SGD gradient is zero, and DP-SGD's direction is heavily influenced by noise, especially with a low privacy budget (see Figure 1a). Therefore, these observations motivated us to propose the DP-GLMtron and DP-TAGLMtron algorithms.
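The contrast between the two update rules can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the non-private Equation 3 (the DP noise is omitted, and the function names are ours):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sgd_step(w, x, y, lr):
    # SGD (Eq. 3): the gradient vanishes when the pre-activation is non-positive
    if x @ w > 0:
        w = w - lr * (relu(x @ w) - y) * x
    return w

def glmtron_step(w, x, y, lr):
    # GLMtron (Eq. 3): the residual drives the update regardless of the sign
    return w - lr * (relu(x @ w) - y) * x
```

On a "dead" example with $\mathbf{x}^{\top}\mathbf{w} \leq 0$, the SGD step contributes nothing (so the private version moves purely by noise), whereas the GLMtron step still uses the residual.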
**Response to the Weakness 3 and Question 3**
We appreciate the reviewer's attention to our approach to addressing dimension independence. **However, the reviewer may have misunderstood our paper.** Generally, our bounds depend on the trace of the covariance matrix $\mathbf{H}$ rather than on the assumption $\\|x\\|_2\leq 1$. In line 222, we merely aim to provide a simple example to illustrate how the bounds behave under the assumption $\\|\mathbf{x}\\|_2 \leq 1$. However, this does not imply that our bounds improve upon previous results only under this setting. Our main contributions are:
1. We provided the first analysis beyond data assumptions that satisfy Gaussian-like distributions, which may overlook cases where the spectrum exhibits decay.
2. Our results offer nearly dimension-independent bounds without assuming $\\|\mathbf{x}\\|_2 \leq 1$. This indicates that as long as $k^*$ is $o(N)$ and the tail summation of eigenvalues is $o(1/N)$, it is possible to achieve diminishing bounds without being affected by the dimension $d$ in the over-parameterized regime.
Why do we focus on the covariance matrix $\mathbf{H}$? We provide some examples here. If $\mathbf{H} = \operatorname{diag}\\{1, 1, \ldots, 1\\}$, its trace satisfies $\operatorname{Tr}(\mathbf{H}) = \mathcal{O}(d)$. If $\mathbf{H} = \operatorname{diag}\\{1, 1/2, \ldots, 1/d\\}$, its trace satisfies $\operatorname{Tr}(\mathbf{H}) = \mathcal{O}(\log d)$. These examples illustrate that only the effective rank influences utility, which explains why we can mitigate the impact of dimension $d$ in some cases.
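The two spectrum examples above can be checked numerically; a small sketch (the value of $d$ is our choice for illustration):

```python
import numpy as np

d = 10_000

# Isotropic spectrum: every eigenvalue is 1, so Tr(H) = d.
iso = np.ones(d)

# Decaying spectrum 1, 1/2, ..., 1/d: Tr(H) is the harmonic sum, of order log d.
decay = 1.0 / np.arange(1, d + 1)

print(iso.sum())    # 10000.0
print(decay.sum())  # roughly log(10000) + 0.577, i.e. about 9.79
```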
Additionally, assuming $\\|\mathbf{w}_0-\mathbf{w}^*\\|_2$ (i.e., $\\|\mathbf{w}^*\\|_2$) is bounded by a constant is a common assumption in DP estimation, even for the linear regression model, as in [5,6,7].
Regarding the $\Gamma$ term, we provide analysis in Lemma D.3 for detailed discussion, where we establish that each $\gamma_t$ is effectively bounded by the $\mathbf{H}$ norm term $(\\|\\mathbf{w}\_*-\\mathbf{w}\_t\\|\_{\\mathbf{H}})$. This crucial detail ensures that our results remain controlled. Furthermore, by decomposing our analysis into the head and tail subspaces of $\mathbf{H}$, we can achieve bounded results analogous to those observed in non-private scenarios, as detailed in references [8,9,10,11].
**Response to the Weakness 4 and Question 4**
We understand your concern regarding our experimental setup. Below, we hope to address your concerns as follows:
1. Why do we have a synthetic dataset with spectrum decay in the main body? The simulation in our main body aims to validate the theoretical insights related to the role of eigenvalues in a high-dimensional setting, as we discussed in Section 4.
2. Typo of MNIST. Thanks for pointing out the typo, we will correct it in our revised version.
3. Why do the results increase with data size and approach $\sim 24$? The observed increase in excess risk as data size grows, along with its value of approximately 24, stems from the nature of our experimental setup. We are considering a ReLU regression model, as outlined in the Preliminary Section and Equation 1, without incorporating any neural network architecture.
Due to space limitations, we will include more explanations, experiment setup, and references in the comment section, please check it in the following.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. Here are some comments.
----
*Regarding the response to the Weakness 2 and Question 2*
I agree with the authors that one experimental setting can be found in which GLMtron performs better than SGD, and likewise for the corresponding DP versions. This experimental evidence is, however, extremely weak. How realistic is the setting of Bernoulli data? How realistic is this simplified model itself? To my understanding, it is not clear whether DP-SGD really presents some *intrinsic* weakness on a class of realistic models (such as ReLU neural networks), and it is not clear that the results on GLMtron can be extended to a less specific setting.
As an example, I believe DP-SGD performs relatively well on simple datasets as MNIST when considering a neural network with ReLU activation (see Abadi&al.), even in very over-parameterized models. Would DP-GLMtron beat the baselines of DP-SGD for an over-parametrized 2-layer neural network? This paper does not go close to answering this question (see my last point on experiments).
----
*Regarding the response to the Weakness 3 and Question 3*
I would like to remark to the authors that I still believe **I did not misunderstand the paper**. I know that the paper does not make the assumption $ || x ||_2 \leq 1$. However, I argue that the bounds are “dimension independent” only when the “effective dimension” is of constant order, which is (in my opinion) arguably surprising. In the (I would say important and not much described) case where we have $|| x ||_2 \sim \sqrt d$, and $tr(H) = d$ (let's say, standard Gaussian data), the number of input dimensions $d$ indeed enters the bounds.
I remark that I see the contribution of characterizing the utility privacy trade-offs as a function of the spectrum of the covariance, in ReLU regression. However, the weakness as I described it in my original review, in my opinion, still remains.
----
*Regarding the response to the Weakness 4 and Question 4*
I find the choice of the experiment of dubious relevance. If it is important to consider regression problems, why use the full MNIST dataset? Conversely, if DP-GLMtron can defeat DP-SGD in wider settings, like MNIST, why not consider classification? I am not strictly asking for an additional experiment right now, as I believe it would be difficult (if not impossible) to implement during the discussion period. However, the current experiment (comparing DP-SGD with DP-GLMtron on a classification task phrased as a regression problem, with generalization errors of order 20) in my opinion fails to corroborate the theoretical claims of this paper.
- - - -
To conclude, contribution on DP ReLU regression is to the best of my knowledge, novel (as I acknowledge the remarks of the authors on the differences with respect to [4]). However, I still believe the motivation of this work to be lacking, which represents the main weakness of this paper, together with the last two replies in this comment.
---
Rebuttal 2:
Title: Additional Rebuttals
Comment: We continue to provide more explanations here.
According to the update rules of the algorithms (see Equation 3) and the further discussion in our response to Weakness 2 and Question 2, this behavior illustrates that DP-SGD struggles to converge effectively in this context. As data size increases, the challenges with convergence lead to an increase in error, which highlights the limitations of DP-SGD in handling certain high-dimensional regression tasks.
Another potential reason for the error approximating ~24 is that our model is a regression model. Regression models output continuous values, but classification problems require discrete class labels. This can result in predictions that aren't directly usable for classification, which may contribute to the observed discrepancy in excess risk.
4. Experimental settings used for DP-SGD on MNIST data. Yes, we will provide the details of our experimental settings as follows:
A.Data Preparation
- Dataset: MNIST, loaded via tensorflow.keras.datasets.mnist.
- Data Preprocessing:
Training and test data are reshaped and normalized to have values between 0 and 1.
B.Experimental Parameters
- Data Sizes: The experiment uses a range of data sizes from 50 to 5000, increasing in steps of 500.
- Feature Dimension: Derived from the reshaped MNIST data.
- Repeats: The experiment is repeated 5 times to ensure robustness.
- lr = $0.01$
- Privacy Parameters:
- Delta: $1 \times 10^{-3}$
- Epsilon: An array with values $[0.05,0.2,0.5]$
- Sigma: Calculated as $\sigma=\sqrt{2 \log (1.25 / \delta)} \times(1 / \epsilon)$
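For concreteness, the noise scales implied by the formula above for the three privacy budgets can be computed directly:

```python
import math

delta = 1e-3
epsilons = [0.05, 0.2, 0.5]

# Gaussian-mechanism noise scale: sigma = sqrt(2 * log(1.25 / delta)) / epsilon
sigmas = [math.sqrt(2 * math.log(1.25 / delta)) / eps for eps in epsilons]
print([round(s, 2) for s in sigmas])  # [75.53, 18.88, 7.55]
```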
C. Experimental Model Updates and Evaluation Metric (our gradient update follows the rule of Equation 3 with private noise):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0)

# Model update (DP-SGD step); `noise` is Gaussian with the sigma above
noise = sigma * np.random.randn(featuredim, 1)
if x_t @ thetasgd > 0:
    gradient = (relu(x_t @ thetasgd) - y_t) * x_t.T + noise
else:
    gradient = np.zeros((featuredim, 1)) + noise
thetasgd = thetasgd - lr * gradient

# Evaluation metric: mean squared loss over the dataset
rwsgd = np.mean((relu(X @ thetasgd) - y) ** 2)
```
References
[1] - Shen, Hanpu, et al. "Differentially private non-convex learning for multi-layer neural networks." arXiv preprint arXiv:2310.08425 (2023).
[2] - Jain, Prateek, and Abhradeep Guha Thakurta. "(Near) dimension independent risk bounds for differentially private learning." International Conference on Machine Learning. PMLR, 2014.
[3] - Song, Shuang, et al. "Evading the curse of dimensionality in unconstrained private glms." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
[4] - Wang, Di, Changyou Chen, and Jinhui Xu. "Differentially private empirical risk minimization with non-convex loss functions." International Conference on Machine Learning. PMLR, 2019.
[5] Varshney, Prateek, Abhradeep Thakurta, and Prateek Jain. "(Nearly) Optimal Private Linear Regression via Adaptive Clipping." arXiv preprint arXiv:2207.04686 (2022).
[6] Cai, T. Tony, Yichen Wang, and Linjun Zhang. "The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy." The Annals of Statistics 49.5 (2021): 2825-2850.
[7] Bassily, Raef, et al. "Private stochastic convex optimization with optimal rates." Advances in neural information processing systems 32 (2019).
[8] Wu, Jingfeng, et al. "Last iterate risk bounds of sgd with decaying stepsize for overparameterized linear regression." International Conference on Machine Learning. PMLR, 2022.
[9] Zou, Difan, et al. "Benign overfitting of constant-stepsize sgd for linear regression." Conference on Learning Theory. PMLR, 2021.
[10] Bartlett, Peter L., et al. "Benign overfitting in linear regression." Proceedings of the National Academy of Sciences 117.48 (2020): 30063-30070.
[11] Wu, Jingfeng, et al. "Finite-sample analysis of learning high-dimensional single relu neuron." International Conference on Machine Learning. PMLR, 2023.
---
Rebuttal 3:
Comment: We thank Reviewer M8TJ for your reviewing efforts and feedback.
**Response to the Cont.W2 and Q2**
We appreciate the reviewer's concerns regarding our experiments. **However, it is important to note that our paper primarily focuses on theoretical work.** The experiments we conducted on Bernoulli data, which satisfy our assumptions for analysis, were designed to validate our theoretical insights into the role of eigenvalues in a high-dimensional setting.
Why do we focus on the ReLU regression model? Indeed, the ReLU regression model can be viewed as a two-layer neural network with a ReLU activation function and a single neuron in the hidden layer, as described in [1]. In particular, [1] considers the following formulation:
$$
f(\mathbf{W}, \mathbf{a}, \mathbf{x})=\frac{1}{\sqrt{m}} \sum_{r=1}^m a_r \sigma\left(\mathbf{w}_r^{\top} \mathbf{x}\right)
$$
where $\mathbf{x} \in \mathbb{R}^d$ is the input, $\mathbf{w}_r \in \mathbb{R}^d$ is the weight vector of the first layer, $a_r \in \mathbb{R}$ is the output weight, and $\sigma(\cdot)$ is the ReLU activation function defined as $\sigma(z)=z$ if $z \geq 0$ and $\sigma(z)=0$ if $z<0$. Since they assume that $a_r$ is not trained and focus on the empirical risk minimization problem with quadratic loss, it is equivalent to ReLU regression when setting $m=1$.
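As a hedged numerical illustration of this reduction (our own sketch, not code from the paper), one can check that with $m = 1$ and the output weight fixed to $1$, the two-layer model collapses exactly to ReLU regression:

```python
import numpy as np

# Illustrative sketch (function names are ours): the two-layer network
# f(W, a, x) = (1/sqrt(m)) * sum_r a_r * ReLU(w_r^T x) reduces to
# ReLU regression when m = 1 and a_1 is fixed to 1.
def two_layer(W, a, x):
    m = W.shape[0]
    return (1.0 / np.sqrt(m)) * np.sum(a * np.maximum(W @ x, 0.0))

def relu_regression(w, x):
    return np.maximum(w @ x, 0.0)

rng = np.random.default_rng(0)
w = rng.standard_normal(5)
x = rng.standard_normal(5)

# With m = 1 and a_1 = 1, the two models coincide.
assert np.isclose(two_layer(w.reshape(1, -1), np.ones(1), x),
                  relu_regression(w, x))
```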
Therefore, ReLU regression can be considered a basic simplified problem in neural network optimization [1,2,3]. As we replied to Reviewer 7Kd7, one potential way to extend our work to a ReLU neural network is to consider $m$ neurons. In this case, the gradient for optimization would become a summation of $m$ gradients, each akin to the one used in our current analysis. This extension represents an area for future work.
Regarding the lack of experiments, we hope to address your concerns in the following part.
**Response to the Cont.W3 and Q3**
We appreciate the reviewer's clarifications and acknowledge that we may have misunderstood your feedback earlier. Yes, our paper offers a novel perspective by demonstrating that dimensionality can be mitigated in specific cases, such as when the data covariance matrix exhibits eigenvalue decay. This aspect might have been overlooked in previous Differential Privacy (DP) studies due to their reliance on data assumptions that resemble Gaussian-like distributions.
However, it is important to note that even in non-private regression models, the mitigation of dimensionality $d$ is typically contingent upon assumptions about the spectrum, as discussed in [3-10].
**Therefore, we do not view this as a weakness of our work.**
**Response to the Cont.W4 and Q4**
To better address your concerns, we have included additional experimental results on three different regression tasks; one summary table is shown here. (Because the PDF file in the global rebuttal cannot be edited during the discussion period, we will incorporate the figures into our revised paper.)
The table below presents the errors observed for the Gas Turbine, CA Housing, and Wine Quality datasets when using various Differential Privacy algorithms: DP-SGD, DP-GLMtron, DP-TAGLMtron, and DP-FTRL. These experiments were conducted using the ReLU regression model across three different privacy budgets, represented by epsilon values of 0.05, 0.2, and 0.5, with a learning rate of 0.01. The errors reported were taken from the results at the final epoch (50th epoch).
It can be observed that DP-GLMtron and DP-TAGLMtron consistently outperformed DP-SGD and DP-FTRL across all datasets and privacy budgets, demonstrating their robustness and effectiveness. Additionally, the performance gap between DP-GLMtron/DP-TAGLMtron and the other two algorithms indicates that DP-SGD and DP-FTRL may converge to suboptimal solutions.
| Dataset | Epsilon | DP-SGD | DP-GLMtron | DP-TAGLMtron | DP-FTRL |
|--------------|---------|--------|------------|--------------|---------|
| Gas Turbine | 0.05 | 12.24 | 8.76 | 9.42 | 13.58 |
| | 0.2 | 14.99 | 9.92 | 8.22 | 13.23 |
| | 0.5 | 15.84 | 9.49 | 9.09 | 15.34 |
| CA Housing | 0.05 | 29.37 | 16.46 | 6.39 | 20.47 |
| | 0.2 | 26.31 | 9.14 | 7.06 | 19.17 |
| | 0.5 | 14.90 | 6.18 | 7.42 | 18.91 |
| Wine Quality | 0.05 | 134.37 | 37.39 | 35.36 | 138.69 |
| | 0.2 | 117.28 | 35.55 | 34.62 | 138.01 |
| | 0.5 | 138.04 | 34.78 | 34.44 | 141.60 |
Reference
[1] Frei, Spencer, Yuan Cao, and Quanquan Gu. "Agnostic learning of a single neuron with gradient descent." Advances in Neural Information Processing Systems 33 (2020): 5417-5428.
---
Rebuttal Comment 3.1:
Comment: We provide more references here.
[2] Diakonikolas, Ilias, et al. "Learning a single neuron with adversarial label noise via gradient descent." Conference on Learning Theory. PMLR, 2022.
[3] Wu, Jingfeng, et al. "Finite-sample analysis of learning high-dimensional single ReLU neuron." International Conference on Machine Learning. PMLR, 2023.
[4] Bartlett, Peter L., et al. "Benign overfitting in linear regression." Proceedings of the National Academy of Sciences 117.48 (2020): 30063-30070.
[5] Zou, Difan, et al. "Benign overfitting of constant-stepsize sgd for linear regression." Conference on Learning Theory. PMLR, 2021.
[6] Tsigler, Alexander, and Peter L. Bartlett. "Benign overfitting in ridge regression." Journal of Machine Learning Research 24.123 (2023): 1-76.
[7] Wu, Jingfeng, et al. "Finite-sample analysis of learning high-dimensional single ReLU neuron." International Conference on Machine Learning. PMLR, 2023.
[8] Zhang, Xiao, et al. "Learning one-hidden-layer relu networks via gradient descent." The 22nd international conference on artificial intelligence and statistics. PMLR, 2019.
[9] Cao, Yuan, et al. "Towards understanding the spectral bias of deep learning." arXiv preprint arXiv:1912.01198 (2019).
[10] Du, Simon, et al. "Gradient descent finds global minima of deep neural networks." International conference on machine learning. PMLR, 2019.
[11] Du, Simon S., et al. "Gradient descent provably optimizes over-parameterized neural networks." arXiv preprint arXiv:1810.02054 (2018).
[12] Cao, Yuan, and Quanquan Gu. "Generalization bounds of stochastic gradient descent for wide and deep neural networks." Advances in neural information processing systems 32 (2019).
[13] Huang, Yu, Yingbin Liang, and Longbo Huang. "Provable generalization of overparameterized meta-learning trained with sgd." Advances in Neural Information Processing Systems 35 (2022): 16563-16576.
---
Rebuttal Comment 3.2:
Comment: I thank the authors for their response. I follow up below.
- - - -
Regarding the motivation, I agree and acknowledge that this is a theoretical work. At the moment, however, the theoretical lesson the results of this paper deliver is that DP-SGD fails in ReLU regression over Bernoulli data, while another proposed algorithm does better. If the lesson is of broader interest (less specific data or model), I expect either the experiments or the theory to suggest this extension. This is something that, for now, I do not see (see my last point on the experiments).
I additionally remark that I am not complaining about the assumptions used to derive the theory (proving theorems in the most general setting is always difficult), I am complaining because I still do not believe the findings of this paper have wider application (even theoretical) than this restricted setting.
- - - -
Regarding the experiments, I would ask if the authors can share their code through an anonymized repository. I find the results puzzling. See, for example, the scores on the wine quality dataset, where the targets are numbers (scores) between 0 and 10. Guessing the average score of 5 at test time gives a perfectly private policy with smaller test losses than the ones reported for all algorithms. It is possible I am not fully following the meaning of the reported numbers, and I would appreciate it if I could see the authors’ implementation.
---
Reply to Comment 3.2.1:
Comment: We thank Reviewer M8TJ for your active feedback.
We would like to clarify a few key points:
- Our analysis of the ReLU regression model can be viewed as a foundational step toward understanding a two-layer neural network with a single neuron [1]. It is not intended to be a specific model.
- A significant theoretical contribution of our work is the introduction of a novel analysis that extends beyond the typical data assumptions resembling Gaussian-like distributions. This analysis demonstrates that dimensionality can be mitigated in specific cases, such as eigenvalue decay, **an aspect that has been overlooked in previous DP communities.** Moreover, the assumptions we employ in our paper are more general and less restrictive than those in existing works [2-5].
- The experimental results from synthetic data, as well as classification and regression tasks, demonstrate a performance gap between DP-SGD/DP-FTRL and DP-GLMtron/DP-TAGLMtron. This suggests that DP-SGD and DP-FTRL may converge to suboptimal solutions not only in the specified setting.
Regarding the code, we have shared it with the AC through an anonymized repository. Please await the AC's response.
[1] Frei, Spencer, Yuan Cao, and Quanquan Gu. "Agnostic learning of a single neuron with gradient descent." Advances in Neural Information Processing Systems 33 (2020): 5417-5428.
[2] Shen, Hanpu, et al. "Differentially private non-convex learning for multi-layer neural networks." arXiv preprint arXiv:2310.08425 (2023).
[3] Varshney, Prateek, Abhradeep Thakurta, and Prateek Jain. "(Nearly) Optimal Private Linear Regression via Adaptive Clipping." arXiv preprint arXiv:2207.04686 (2022).
[4] Wang, Di, and Jinhui Xu. "On sparse linear regression in the local differential privacy model." International Conference on Machine Learning. PMLR, 2019.
[5] Liu, Xiyang, Prateek Jain, Weihao Kong, Sewoong Oh, and Arun Sai Suggala. "Near optimal private and robust linear regression." arXiv preprint arXiv:2301.13273 (2023). | Summary: The paper provides an algorithm for differentially private ReLU regression. They claim that their results outperform DP-SGD. Additionally, for the case of a small privacy budget, they provide a tree aggregation protocol that balances privacy and utility. Finally, extensive experimental results are provided to support the claims.
Strengths: 1) A novel algorithm is provided for differentially private RELU regression which outperforms DPSGD.
2) Detailed theoretical analysis is provided for the proposed algorithms.
3) Sufficient experimental evaluation is provided for the proposed algorithms with experiments on both synthetic and real-world data
Weaknesses: 1) There are a lot of recent works on privacy-utility tradeoffs which seem relevant to this paper. A literature survey involving works in that area can be helpful for future readers to further extend this work.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Can the authors propose ideas on how the proposed approach be extended to models with higher complexity such as deep neural networks?
2) The authors analyze the privacy-utility tradeoff in their work. They should also mention similar papers involving privacy-utility tradeoff algorithms such as
[a] Alireza Fallah, Ali Makhdoumi, Azarakhsh Malekian, and Asuman Ozdaglar Optimal and Differentially Private Data Acquisition: Central and Local Mechanisms Operations Research 2024 72:3, 1105-1123
[b] Ameya Anjarlekar, Rasoul Etesami, & R. Srikant. (2023). Striking a Balance: An Optimal Mechanism Design for Heterogenous Differentially Private Data Acquisition for Logistic Regression.
[c] Andrew Lowy, Zeman Li, Tianjian Huang, & Meisam Razaviyayn. (2024). Optimal Differentially Private Model Training with Public Data.
3) Typo on line 335
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer JyY5 for the valuable review as well as the positive feedback!
**Response to the Weakness 1**
Thank you for your suggestion. We agree that adding a literature survey on privacy-utility tradeoffs would be beneficial. We will include more related work in the revised paper and hope to provide valuable context for readers.
**Response to the Question 1**
Yes. ReLU regression, as discussed in our paper, can be viewed as equivalent to a two-layer neural network model with a rectified linear unit (ReLU) activation function and a single neuron in the hidden layer, as described in [2]. In particular, [2] considers the following formulation:
$$
f(\mathbf{W}, \mathbf{a}, \mathbf{x})=\frac{1}{\sqrt{m}} \sum_{r=1}^m a_r \sigma\left(\mathbf{w}_r^{\top} \mathbf{x}\right),
$$
where $\mathbf{x} \in \mathbb{R}^d$ is the input, $\mathbf{w}_r \in \mathbb{R}^d$ is the weight vector of the first layer, $a_r \in \mathbb{R}$ is the output weight, and $\sigma(\cdot)$ is the ReLU activation function defined as $\sigma(z) = z$ if $z \geq 0$ and $\sigma(z) = 0$ if $z < 0$. Since they assume that $a_r$ is not trained and focus on the empirical risk minimization problem with quadratic loss, it is equivalent to ReLU regression when setting $m = 1$. Therefore, ReLU regression can be considered a basic simplified problem in neural network optimization [1,2,3].
**Response to the Question 2**
Thank you for pointing out these relevant works. We will include references to these papers in our revised manuscript to provide a broader context for the privacy-utility tradeoff algorithms discussed in our study.
**Response to the Question 3**
Thank you for catching the typo on line 335. We will correct it ("simulation") in the revised paper.
[1] Wu, Jingfeng, et al. "Finite-sample analysis of learning high-dimensional single ReLU neuron." International Conference on Machine Learning. PMLR, 2023.
[2] Frei, Spencer, Yuan Cao, and Quanquan Gu. "Agnostic learning of a single neuron with gradient descent." Advances in Neural Information Processing Systems 33 (2020): 5417-5428.
[3] Diakonikolas, Ilias, et al. "Learning a single neuron with adversarial label noise via gradient descent." Conference on Learning Theory. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer JyY5 once again for your efforts and time. Please feel free to reach out if you have any further questions or suggestions. | Summary: This paper revisits the problem of differentially private (DP) ReLU regression in overparameterized regimes. The authors propose two novel algorithms, DP-GLMtron and DP-TAGLMtron, which outperform conventional methods like DPSGD. The paper provides theoretical analysis of these algorithms, including privacy guarantees and utility bounds. The authors extend their analysis beyond Gaussian-like data distributions to settings with eigenvalue decay, showing how data distribution impacts learning in high dimensions. The paper also includes empirical results on both synthetic and real-world datasets to validate their theoretical findings.
Strengths: - The proposed algorithm is novel and intuitive.
- The authors provide comprehensive theoretical analysis, and the results are concisely conveyed.
- The writing is good.
Weaknesses: - The assumptions used in this paper, though relaxed, are strong and may not always hold in practice.
- The simulation is weak and contains some mismatches with the theory.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Can the authors provide the specific location of the claimed rate $\widetilde{O}(\frac{1}{\sqrt{N}}+\min \left(\frac{d^{1 / 2}}{(N \varepsilon)^{1 / 2}}, \frac{1}{(N \varepsilon)^{2 / 3}}\right)) $ in [1]? I scanned the paper and don't see how this rate appears. Thanks in advance :).
- What is the role of ReLU regression? Does it serve as a fundamental simplified problem for neural network optimization?
- The authors provide special cases of $D_{eff}^{priv}$. However, I would appreciate an explanation of its form. For instance, compared to $D_{eff}$, why is it not squared over $\lambda_i$, and why did the sum of inverse $\lambda_i$ appear?
- In the simulation (Figure 1), I noticed that in some cases (especially Figure 1(a)), the excess risk is increasing with respect to sample sizes. How does this happen?
- Again in the simulation, DP-GLMtron gradually matches the performance of DP-TAGLMtron as $\varepsilon$ increases, while this is not true for the theory, where DP-GLMtron does not have a guarantee when $\varepsilon$ is large. How does this happen?
- Following the last two questions, it seems that DP-GLMtron and DP-TAGLMtron are more sensitive to variations of $\varepsilon$ than DP-SGD. This is also true for the theory (for instance, Table 1 suggests that for the proposed methods and DPSGD, dependence is $\varepsilon^2$ and $\varepsilon^{2/3}$, respectively).
- Why did the authors choose the settings for $i^{-2}$ and $i^{-3}$ in the simulation?
[1] Differentially Private Non-convex Learning for Multi-layer Neural Networks.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The limitations should appear in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 7Kd7 for the careful and detailed review as well as the constructive feedback.
**Response to the Weakness 1**
**We kindly disagree with the reviewer's opinion.** It is important to note that our work is theoretical, and our assumptions are standard in theoretical studies of ReLU regression or linear regression. As we mentioned in the paper, most assumptions are weaker than the sub-Gaussian data assumption. Even for linear regression models, certain assumptions are necessary to provide analysis on private estimation, such as [1,2,3].
**Response to the Weakness 2**
We understand the reviewer's concern about our experimental results and we hope to address this in the following answers (Question 4 part).
**Response to the Question 1**
Yes, the rate is provided in Theorem 5 of [4], which analyzes the performance of the DP-Projected Gradient Descent for ReLU regression :).
**Response to the Question 2**
Previous studies on differentially private (DP) statistical estimation have primarily focused on convex models, such as linear models or linear regression, with less attention given to non-convex models. ReLU regression is one of the most fundamental non-convex models in neural networks, which is why we chose to focus on it in our paper. More specifically, ReLU regression, as discussed in our paper, can be viewed as equivalent to a two-layer neural network model with a rectified linear unit (ReLU) activation function and a single neuron in the hidden layer, as described in [5]. Specifically, they consider:
$$
f(\mathbf{W}, \mathbf{a}, \mathbf{x})=\frac{1}{\sqrt{m}} \sum_{r=1}^m a_r \sigma\left(\mathbf{w}_r^{\top} \mathbf{x}\right),
$$
where $\mathbf{w}_r \in \mathbb{R}^d$ is the weight vector of the first layer, $a_r \in \mathbb{R}$ is the output weight, and $\sigma(\cdot)$ is the ReLU activation function defined as $\sigma(z) = z$ if $z \geq 0$ and $\sigma(z) = 0$ if $z < 0$. Since they assume that $a_r$ is not trained and focus on the empirical risk minimization problem with quadratic loss, it is equivalent to ReLU regression when setting $m = 1$. Therefore, ReLU regression can be considered a basic, simplified problem in neural network optimization [6,7,8].
**Response to the Question 3**
We are glad to see the reviewer notice this difference. The term $D_{eff}$ arises from the variance error term, which is related to model noise. Specifically, it takes the form of $\mathbf{x}^{\top} z$. In contrast, $D_{eff}^{priv}$ originates from the private error introduced by the private noise $\mathbf{g}$. Thus, when considering how the noise terms interact with the data points $\mathbf{x}$, we observe the following: 1) model noise leads to $\mathbf{H}$; 2) private noise results in $\mathbf{I}$. This is the main reason why $D_{eff}^{priv}$ lacks the component $\lambda_i$ compared to $D_{eff}$. We provide more details in the Appendix (Lemma D.4).
**Response to the Question 4 and Weakness 2**
This phenomenon for DP-SGD is actually our motivation for studying the private ReLU regression problem. As we mentioned in our paper section 4 (equation 3), the SGD and GLMtron follow the different update rules:
$$SGD: \mathbf{w}\_t=\mathbf{w}\_{t-1}-\eta \cdot (\operatorname{ReLU} (\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1})-y\_t) \mathbf{x}\_t \cdot \mathbb{1}[\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1}>0] $$
$$GLMtron: \mathbf{w}\_t=\mathbf{w}\_{t-1}-\eta \cdot (\operatorname{ReLU} (\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1} )-y\_t ) \mathbf{x}\_t $$
This indicates that if $\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1} \leq 0$, the gradient for SGD becomes zero. Then, in DP-SGD, the noise added to the gradient can significantly influence the update direction, especially with a low privacy budget. Indeed, we visualized the training trajectories of DP-SGD and DP-GLMtron on a 2D noiseless ReLU regression (please check the files in the global rebuttal). The visualization demonstrates that DP-SGD struggles to converge under large noise conditions and is more likely to settle at a saddle point rather than reach the optimal solution. This observation motivated us to propose the DP-GLMtron algorithm.
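A minimal sketch of the two noisy updates (our own simplification: per-sample steps with isotropic Gaussian noise, with gradient clipping and exact noise calibration omitted) makes the failure mode concrete: when $\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1} \leq 0$, the DP-SGD step is pure noise, while the GLMtron-style pseudo-gradient still carries signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, x, y, eta, sigma):
    # Gradient vanishes whenever x^T w <= 0, so only the privacy noise
    # moves w in that region.
    residual = max(float(x @ w), 0.0) - y
    grad = residual * x * float(x @ w > 0)
    return w - eta * (grad + sigma * rng.standard_normal(w.shape))

def dp_glmtron_step(w, x, y, eta, sigma):
    # No indicator: the pseudo-gradient stays informative on both sides of 0.
    residual = max(float(x @ w), 0.0) - y
    grad = residual * x
    return w - eta * (grad + sigma * rng.standard_normal(w.shape))

# With sigma = 0 and x^T w <= 0, the SGD step does not move w,
# while the GLMtron step does.
w = np.array([-1.0, 0.0])
x = np.array([1.0, 0.0])
assert np.allclose(dp_sgd_step(w, x, 2.0, 0.1, 0.0), w)
assert np.allclose(dp_glmtron_step(w, x, 2.0, 0.1, 0.0), [-0.8, 0.0])
```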
**Response to the Question 5**
Exactly, DP-GLMtron cannot provide a privacy guarantee when $\varepsilon$ is large, but here the performance refers to the utility/excess risk. A large $\varepsilon$ implies that less noise is added to the gradient, which means the gradient is less impacted by noise, improving performance. The primary difference between DP-GLMtron and DP-TAGLMtron lies in the method of noise accumulation. Therefore, when the noise is reduced, the performance gap between these two algorithms narrows.
**Response to the Question 6**
**We kindly disagree with the reviewer's opinion.** The utility bound depends not only on the privacy budget but also on the data size $n$ and the dimension $d$. It is not possible to separate these factors to independently demonstrate the efficiency of the algorithms. Thus, our rate $O(\frac{1}{n^2\epsilon^2})$ is better than $O(\frac{1}{(n\epsilon)^{2/3}})$. Additionally, the rate in Table 1 for [1] is for DP-PGD rather than DP-SGD.
**Response to the Question 7**
The two eigenvalue decay scenarios, $i^{-2}$ and $i^{-3}$, satisfy our assumptions, and we would like to validate our theoretical insights on the role of eigenvalues in a high-dimensional setting, as detailed in Corollary 4.3.
As discussed previously, existing work in the DP community primarily relies on data assumptions satisfying Gaussian-like distributions, which reduces the data covariance matrix to an identity matrix scaled by variance. Such an assumption may cause the case to be overlooked when the spectrum exhibits decay.
Therefore, our paper adopts the eigenvalue decay scenarios $i^{-2}$ and $i^{-3}$ in our simulation to examine the effect of the eigenvalues.
Due to space limitations, we will include references in the comment section, please check it in the following.
---
Rebuttal 2:
Title: References for the rebuttal
Comment: [1] Varshney, Prateek, Abhradeep Thakurta, and Prateek Jain. "(Nearly) Optimal Private Linear Regression via Adaptive Clipping." arXiv preprint arXiv:2207.04686 (2022).
[2] Cai, T. Tony, Yichen Wang, and Linjun Zhang. "The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy." The Annals of Statistics 49.5 (2021): 2825-2850.
[3] Bassily, Raef, et al. "Private stochastic convex optimization with optimal rates." Advances in neural information processing systems 32 (2019).
[4] Shen, Hanpu, et al. "Differentially private non-convex learning for multi-layer neural networks." arXiv preprint arXiv:2310.08425 (2023).
[5] Du, Simon S., et al. "Gradient descent provably optimizes over-parameterized neural networks." arXiv preprint arXiv:1810.02054 (2018).
[6] Wu, Jingfeng, et al. "Finite-sample analysis of learning high-dimensional single ReLU neuron." International Conference on Machine Learning. PMLR, 2023.
[7] Frei, Spencer, Yuan Cao, and Quanquan Gu. "Agnostic learning of a single neuron with gradient descent." Advances in Neural Information Processing Systems 33 (2020): 5417-5428.
[8] Diakonikolas, Ilias, et al. "Learning a single neuron with adversarial label noise via gradient descent." Conference on Learning Theory. PMLR, 2022.
---
Rebuttal 3:
Comment: Thanks for the detailed clarification.
- Regarding Response to Question 2: so does your work lead to any suggestions on NN optimization? (no criticism if no)
- Regarding Response to Question 6: but if the sample size is fixed, varying $\epsilon$ would result in a larger variation in your method? Because the $\epsilon^2$ term leads to a larger slope compared to $\epsilon^{2/3}$. Also, in the simulation, your methods performed evenly with DP-FTRL when $\epsilon$ is small, but outperformed it significantly for large $\epsilon$.
---
Rebuttal 4:
Title: Response to Reviewer 7Kd7
Comment: We appreciate the Reviewer for the thoughtful feedback and hope the following responses address your concerns effectively!
**Response to the Cont.Q2**
Yes, we would like to further explore the relationship between our work and neural network optimization here. Theoretically, as we mentioned earlier (our response to Q2), ReLU regression can be viewed as equivalent to a two-layer neural network model with a single neuron. Therefore, to extend our work to NN optimization, one potential way is to consider $m$ neurons. In this case, the gradient for optimization becomes a summation of $m$ gradients, each akin to the one used in our current analysis. Then, we can conduct an analysis analogous to that presented in this paper.
Empirically, our work offers a novel perspective on the over-parameterization problem, showing that the dimension $d$ can be avoided in certain cases, such as when the data covariance matrix exhibits eigenvalue decay. Additionally, our findings highlight that DP-SGD struggles with convergence and tends to settle at suboptimal points. It would be quite interesting to explore whether similar observations can be made with a two-layer neural network model.
**Response to the Cont.Q6**
To better address your question, we have included additional experimental results related to the performance of different algorithms under various privacy budgets. (We found that the PDF file in the global rebuttal cannot be edited during the discussion period. Therefore, we will incorporate these results into our revised paper.)
When the sample size is fixed, the utility in [1] can be understood as $1 /(N \epsilon)^{2 / 3}$, whereas our utility bound is $1 /(N \epsilon)^2$, which is an improvement over [1] in terms of its dependence on $N$ and $\epsilon$.
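To make the comparison concrete, here is a quick numeric check (our own illustration, with all constants and logarithmic factors suppressed) of the two rates at a fixed $N$ and $\epsilon$:

```python
# Illustrative only: compare the two utility rates from the discussion,
# 1/(N·eps)^{2/3} (prior work [1]) versus 1/(N·eps)^2 (this paper),
# with constants and log factors suppressed.
N, eps = 10_000, 0.5

rate_prior = (N * eps) ** (-2.0 / 3.0)  # ~3.4e-3 at N·eps = 5000
rate_ours = (N * eps) ** (-2.0)         # 4e-8 at N·eps = 5000

assert rate_ours < rate_prior  # the quadratic rate decays far faster
```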
In this additional experiment, we demonstrate the sensitivity of the privacy budget for each algorithm, and it is clear that DP-SGD faces the same issue. When $\epsilon$ is small, the tree mechanism in DP-FTRL allows it to add less noise compared to the naive Gradient Descent or GLMtron algorithm, resulting in similar performance between DP-FTRL and DP-GLMtron. Additionally, it is important to note that DP-FTRL should be compared with DP-TAGLMtron rather than DP-GLMtron, as both DP-FTRL and DP-TAGLMtron utilize the tree mechanism, whereas DP-GLMtron does not. Moreover, even though DP-FTRL converges at the end of the training, it can be observed that the excess risk for DP-FTRL is significantly larger than that of DP-TAGLMtron, indicating that it may converge to a suboptimal solution.
[1] Shen, Hanpu, et al. ”Differentially private non-convex learning for multi-layer neural networks.” arXiv preprint arXiv:2310.08425 (2023)
---
Rebuttal Comment 4.1:
Comment: Thanks and I am satisfied with the clarifications. I have updated my score.
---
Reply to Comment 4.1.1:
Comment: We would like to thank Reviewer 7Kd7 again for your valuable time in discussing the work, and we greatly appreciate your positive feedback! If you have any further questions or suggestions, please feel free to reach out. | null | null | Rebuttal 1:
Rebuttal: To all reviewers:
We would like to thank all the reviewers for their great efforts and insightful comments! Based on their suggestions, we address some common concerns and discuss them in the revised paper.
**1. Motivation for Proposing the DP-GLMtron Algorithm and Focus Away from DP-SGD:**
Our motivation for developing the DP-GLMtron algorithm stems from observations regarding the limitations of DP-SGD. Initially, we noticed that DP-SGD sometimes fails to reach the optimal solution and can even struggle to converge, as evidenced by our experimental results. To further explore this issue, we visualized the training trajectories of both DP-SGD and DP-GLMtron on a 2D noiseless ReLU regression with symmetric Bernoulli data. Please refer to the figures in the PDF for detailed illustrations.
The visualization in the attached PDF file shows that DP-SGD encounters difficulties in converging under conditions of high noise, often settling at a saddle point instead of reaching the optimal solution. This behavior is particularly evident when using a low privacy budget, as shown in Figure 1(a) in our paper.
As discussed in our paper (Section 4, Equation 3), SGD and GLMtron follow different update rules:
$$SGD: \mathbf{w}\_t=\mathbf{w}\_{t-1}-\eta \cdot (\operatorname{ReLU} (\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1})-y\_t) \mathbf{x}\_t \cdot \mathbb{1}[\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1}>0] $$
$$GLMtron: \mathbf{w}\_t=\mathbf{w}\_{t-1}-\eta \cdot (\operatorname{ReLU} (\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1} )-y\_t ) \mathbf{x}\_t $$
This indicates that when $\mathbf{x}\_t^{\top} \mathbf{w}\_{t-1} \leq 0$, the gradient for SGD becomes zero. When considering DP-SGD, noise is added to the gradient, and the direction of DP-SGD is significantly influenced by random Gaussian noise. These observations motivated us to propose the DP-GLMtron and DP-TAGLMtron algorithms, which address these convergence issues and provide more reliable results under different noise conditions.
**2. Increase in Excess Risk with Data Size and Its Approach to ~24.**
The observed increase in excess risk as data size grows, along with its convergence toward approximately 24, stems from the nature of our experimental setup. We are considering a ReLU regression model, as outlined in the Preliminary Section and Equation 1, without incorporating any neural network architecture.
Moreover, according to the updated rules of algorithms (see Equation 3) and the above discussion, this behavior illustrates that DP-SGD struggles to converge effectively in this context. As data size increases, the challenges with convergence lead to an increase in error, which highlights the limitations of DP-SGD in handling certain high-dimensional regression tasks.
Another potential reason for the error approaching ~24 is that our model is a regression model. Regression models output continuous values, but classification problems require discrete class labels. This can result in predictions that aren't directly usable for classification, which may contribute to the observed large values in excess risk.
Pdf: /pdf/81bff6b47a5f2019aa3611375e48bb80ba6744c5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Zero-to-Hero: Enhancing Zero-Shot Novel View Synthesis via Attention Map Filtering | Accept (poster) | Summary: This paper proposes a test-time approach that adaptively modifies the attention map during inference to enhance the consistency and plausibility of novel view synthesis. Experimental results on GSO demonstrate improved performance with the proposed method, and the authors conducted ablation studies to assess the impact of different modules on performance.
Strengths: 1. The authors carefully analyze the challenges and issues of the current zero-1-to-3 method on novel view synthesis.
2. The idea proposed in the paper to filter attention maps akin to SGD is interesting as it can enhance the robustness of attention maps to a certain extent.
Weaknesses: 1. Since (Re-)Sampling increases the computational load, with the hyperparameter R determining the number of resampling iterations, it would be beneficial for the authors to conduct an ablation study on how this parameter affects both results and computational overhead.
2. The method proposed by the authors has shown limited improvement in results. They need to validate the effectiveness of their proposed approach on a larger and more diverse dataset.
3. When conducting the ablation study, the authors' selection of only 10 objects from GSO could introduce bias into the results. It would be preferable for them to include all objects to ensure a more comprehensive evaluation.
4. The authors' proposal of Mutual Self-Attention (MSA), which has been used in other papers such as Consistent123, Tune-A-Video, and MasaCtrl, cannot be considered a unique contribution in their paper.
5. In the ablation study (Table 2), Mutual Self-Attention (MSA) shows the largest improvement, with a greater increase in PSNR (MSA 17.82 vs AMF 17.58, last two rows). This suggests that the method's improvement primarily stems from MSA rather than the core contribution point, AMF.
Technical Quality: 3
Clarity: 2
Questions for Authors: The paper includes several hyperparameters such as the number of resampling iterations (R) and the alpha for cross-step aggregation. How did the authors set these hyperparameters, and how significantly do they impact the final results?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The method proposed by the authors seems intriguing, but the actual improvement in results is limited. Moreover, the largest gain comes from a commonly used trick for enhancing viewpoint consistency, Mutual Self-Attention (MSA).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for insightful feedback and comments.
**1. Setting the Hyperparameters R and alpha.**
All hyperparameters were tuned based on a small random set of objects. Due to limited computational resources, the tuning process was not exhaustive. We found that the method's performance is not highly sensitive to these parameters.
- **Cross-step weight (alpha)**: We experimented with values ranging from 0.1 to 0.9 in increments of 0.1. Alpha determines the weight assigned to previous predictions in the cross-step aggregation. In Zero123-XL, early predictions were generally reliable, so a larger weight yielded better results. We maintained the same parameters for Zero-1-to-3. In new experiments conducted for the rebuttal, we applied our method to additional models such as ControlNet and MVDream. In both cases, a smaller weight generally produced better outcomes.
- **Resampling iterations (R)**: We observed that the model's performance is relatively insensitive to the choice of R, with values between 4 and 8 yielding similar results. While some objects benefited from larger values (`~10`), the overall improvement was minimal. Additionally, as shown in Figure 8 in the paper's appendix, large values of R (`~15-20`) can reduce diversity. For additional experiments with models like ControlNet and MVDream, we fixed R at 5 and did not test other values.
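The cross-step aggregation described above can be sketched as a simple weighted blend. This is an illustrative assumption based on the rebuttal's description (alpha weighting the previous step's prediction); the function name and the exact blending rule are not from the paper.

```python
def cross_step_aggregate(attn_prev, attn_curr, alpha):
    """Blend the previous denoising step's attention map into the current one.
    `alpha` is the weight on the previous prediction (swept over 0.1-0.9 in
    the rebuttal). The exact blending rule here is an assumption."""
    return [[alpha * p + (1.0 - alpha) * c for p, c in zip(prow, crow)]
            for prow, crow in zip(attn_prev, attn_curr)]

# Toy 2x2 attention maps; with alpha=0.5 the result is the elementwise mean.
prev = [[0.7, 0.3], [0.2, 0.8]]
curr = [[0.5, 0.5], [0.4, 0.6]]
blended = cross_step_aggregate(prev, curr, alpha=0.5)
```

A larger alpha leans on earlier (already reliable) predictions, matching the observation that Zero123-XL benefited from larger weights while ControlNet and MVDream preferred smaller ones.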
**2. Additional Computational Cost of Resampling.**
Resampling iterations are computationally equivalent to adding more denoising steps, as both linearly increase the number of function evaluations (NFE). In the paper, we addressed this by counting the NFE and keeping it comparable to the base model to ensure a fair comparison. Our chosen value for R, and the specific timesteps where we applied it, resulted in mapping 26 denoising time steps to a total of 66 NFE. For further details, please refer to our response #1 to reviewer UpfM, where we discuss the individual computational overhead of each module.
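The NFE accounting above can be made concrete with a small sketch. The split of the 26 denoising steps into 10 resampled and 16 plain steps is an assumption chosen to reproduce the reported 66 NFE with R = 5; the actual timesteps where resampling is applied are not stated here.

```python
def total_nfe(num_steps, resampled_steps, R):
    """Total number of function evaluations: an ordinary denoising step costs
    one evaluation; a resampled step costs R evaluations. The 10/16 split
    below is an assumption that matches the reported 26 steps -> 66 NFE."""
    plain = num_steps - resampled_steps
    return plain + resampled_steps * R

nfe = total_nfe(num_steps=26, resampled_steps=10, R=5)  # 16 + 10*5 = 66
```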
**3. The Extent of Quantitative Improvement.** Please refer to General Comment #4 for a detailed discussion of the quantitative improvements.
**4. Evaluating on Additional Dataset.**
We evaluated our approach on the RTMV benchmark [1], an out-of-distribution dataset of 3D scenes, each containing 20 random objects. Here, we present the results for Zero123-XL, using the same hyperparameters reported in the paper. Our evaluation shows improvements across all metrics compared to the baselines. GSO and RTMV are the common evaluation datasets used in Zero-1-to-3 and its follow-ups.
| |T|NFE|PSNR|SSIM|LPIPS|IoU|
|-|-|-|-|-|-|-|
|Base|25|25|10.51|0.589|0.396|71.5%|
|Base|50|50|10.54|0.589|0.393|71.9%|
|Base|100|100|10.53|0.588|0.392|72%|
|Ours|26|66|11.18|0.619|0.372|73%|
**5. Including all objects in the ablation study.**
Thank you for your suggestion. In response, we have conducted the ablation study across all test objects, focusing on Zero123-XL. The new results corroborate the trends observed in our initial submission. We will ensure that the comprehensive evaluation is included in the camera-ready version.
|Hourglass|Resample|AMF|MSA|PSNR|SSIM|LPIPS|IoU|
|-|-|-|-|-|-|-|-|
|-|-|-|-|17.27|0.854|0.162|76.4%|
|+|-|-|-|17.44|0.855|0.161|76.6%|
|+|+|-|-|17.7|0.857|0.160|77%|
|+|+|+|-|17.92|0.858|0.157|77.8%|
|+|+|-|+|18.25|0.862|0.155|77.6%|
|+|+|+|+|18.35|0.864|0.153|78.3%|
**6. The usage of Mutual Self-Attention in Zero-to-Hero.**
We acknowledge that the formulation of MSA was not introduced in our work. We have cited prior works such as MasaCtrl in our paper, and we will include the additional references suggested by the reviewer. However, we wish to highlight the distinctions between previous works and our unique application of this technique.
To our knowledge, most prior works utilizing MSA fall into two categories:
* Training-free usage of MSA (e.g., MasaCtrl): In these works, MSA is typically applied after the general structure of the target is formed to transfer appearance details from the input to the target. In this scenario, MSA does not contribute to the initial structure formation of the target.
* Training or fine-tuning models with MSA layers (e.g., Consistent123, Tune-A-Video): These works incorporate MSA within the training or fine-tuning process.
Our approach is distinct as we employ MSA in a training-free manner, but crucially, we apply it from the beginning of the denoising process until the target structure stabilizes—a phase we term "Early-Stage Shape Guidance" in our paper. This early application of MSA leads to more stable results and often prevents the model from generating out-of-distribution outputs (like the leftmost and rightmost chairs in Figure 1 in the paper).
To illustrate the impact of early-stage MSA on structure, we conducted a simple experiment using Zero123-XL with 50 DDIM steps. We measured the effect of activating MSA at different stages of denoising on Intersection over Union (IoU) metric, noting that image quality metrics improved similarly. Our approach demonstrated a significant improvement in the structural integrity of the image. We believe that applying MSA in the later stages of the denoising process introduces a bias towards the input images, which can disrupt the shape and appearance.
|Method|Timesteps where MSA is applied|IoU|
|-|-|-|
|No MSA|-|76.4%|
|All the way|1000 to 0|76.6%|
|Start after the structure is initially formed|800 to 0|76.5%|
|Ours (from the beginning and terminate early)|1000 to 600|77.6%|
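The scheduling variants in the table above amount to a simple on/off predicate over the diffusion timestep (counting down from 1000 to 0). A minimal sketch, assuming the "Ours" interval of 1000 to 600 from the table; the function name and step granularity are illustrative.

```python
def msa_active(t, start=1000, stop=600):
    """Early-stage shape guidance: MSA is applied only while the diffusion
    timestep t (counting down from `start` to 0) has not yet passed the
    early-termination step `stop`. Endpoints follow the table above."""
    return stop <= t <= start

# With a coarse 200-step grid, MSA fires only in the early denoising stage.
schedule = [t for t in range(1000, -1, -200) if msa_active(t)]
```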
**7. The Contribution of MSA vs. AMF.**
MSA and AMF address different artifacts and complement each other. MSA's greater impact on metrics does not diminish AMF's value. The combined methods show consistent improvements. For more details, please see General Comment #1 and Figure 1 (right) in the supplementary materials.
[1] RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Please take a moment to read the rebuttal if you haven't already, and assess the new information provided by the authors. Then please provide your updated comments.
Thanks,
AC
---
Rebuttal Comment 1.2:
Comment: Thank you to the authors for the detailed response. The rebuttal addressed most of my concerns, and I am willing to raise the score.
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer for their thoughtful feedback and for taking the time to review our rebuttal and engage with us. Your comments are invaluable for us. We are pleased we could address your concerns and appreciate that you revised your evaluation accordingly.
We are happy to address any further concerns. | Summary: This paper proposes a novel approach to generate realistic images from arbitrary views based on a single source image. The authors introduce a test-time method that enhances view synthesis by manipulating attention maps during the denoising process of diffusion models. This process improves geometric consistency and realism without requiring retraining or significant computational resources. The key contributions include an attention map filtering mechanism, a modification of the self-attention mechanism to integrate source view information, and a specialized sampling schedule. Experimental results show substantial improvements in fidelity and consistency, demonstrating the effectiveness of the proposed method.
Strengths: 1. The paper introduces a unique test-time method for enhancing view synthesis, specifically manipulating attention maps during the denoising process, which is innovative and effective.
2. The paper is generally well-written and easy to follow.
Weaknesses: 1. Qualitative results are lacking in the experimental section, and even in the Appendix I cannot find further comparisons.
2. Comparing against and building only on Zero123 and Zero123-XL is also insufficient.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can Zero-to-Hero be equipped with other multi-view diffusion models, e.g., MVDream or Wonder3D?
2. On which views are the quantitative metrics calculated?
3. Will Zero-to-Hero hurt the diversity of the Zero123 model by producing relatively consistent outputs across different seeds?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Please refer to the Weaknesses and Questions. I will consider raising my score if the authors can address my concerns and provide more convincing results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for insightful feedback and comments.
**1. Additional Qualitative Results.**
We kindly refer the reviewer to General Comment #1 and the supplementary material, where we have included further qualitative results.
**2. Applicability of Zero-to-Hero in Multiview Diffusion.**
Thank you for raising this critical point. While Zero-to-Hero was initially designed for single-view generative models, our Attention Map Filtering can indeed be extended to other diffusion models, including multiview synthesis. Based on your suggestion and that of reviewer qQr5, we have implemented the attention map filtering mechanism for MVDream and two pre-trained ControlNet models, observing significant improvements. Please refer to General Comment #2 for more details on these implementations.
**3. Quantitative Evaluation — Additional Details.**
For our quantitative evaluation, we rendered 8 random views of each object and used each view as a source. For each source view, we generated the remaining 7 views, resulting in a total of 56 generation tasks per object (each defined by a unique source-target pair). Each target view was generated 3 times with different seeds, yielding a total of 168 images per object. We then averaged the scores across all views and objects to ensure a comprehensive evaluation.
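The evaluation protocol above is just ordered-pair counting; a small sketch makes the 56-task and 168-image figures explicit. The variable names are illustrative, not from the paper.

```python
from itertools import permutations

views = list(range(8))                      # 8 rendered views per object
# Each ordered (source, target) pair with source != target is one task.
pairs = list(permutations(views, 2))        # 8 * 7 = 56 tasks per object
seeds = 3                                   # each target generated 3 times
images_per_object = len(pairs) * seeds      # 56 * 3 = 168 images per object
```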
**4. Effect of Zero-to-Hero on Generation Diversity.**
In general, excessive use of attention map filtering might reduce generation diversity. We have analyzed this in the appendix of our paper (Figures 8 and 9). However, we find that responsible usage usually preserves diversity. For instance, in Figure 1 of the main paper, the generated chairs display diverse back sides, and the turtles in the third row further demonstrate the model's ability to produce varied results. The balance between diversity and fidelity is complex and warrants further study. Often, the "diversity" observed in the base model includes artifacts and deviations from the real-world distribution, as illustrated in Figure 1 of the main paper and Figure 1(left) in the supplementary material.
---
Rebuttal 2:
Comment: Thank you for your comprehensive rebuttal. The majority of my concerns have been addressed, particularly regarding the details of the Quantitative Evaluation, the applicability of Zero-to-Hero in Multiview Diffusion, and the additional qualitative results. However, I remain not entirely convinced about the purported balance between "diversity" and "fidelity". The attention map filtering is applied to a multi-view diffusion model, whose diversity should be largely preserved to maintain its non-deterministic nature as a diffusion model. Considering all these factors, I have decided to revise my score from borderline reject to borderline accept. I intend to consult other reviewers' discussions before making a final decision and I am more than happy to discuss with other reviewers and AC, SAC and PC about the diversity concern.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their thoughtful comments and for taking the time to review our rebuttal, your feedback is invaluable to us. We are glad we could address your major concerns and appreciate your revised evaluation accordingly.
Regarding your concern about generation diversity, we would like to offer some additional clarifications.
While excessive resampling might reduce diversity (e.g. using large values of R), in practice we find that a moderate application of AMF is sufficient for improving fidelity while preserving diversity. Note that we only apply our filtering mechanism in the earlier steps of the denoising process, using a small value of R (5). **This enables our method to maintain diversity effectively.**
When evaluating diversity against base models, we observed that some results of the base models were highly implausible, as can be seen in Figure 1 in the paper. For example, Zero-1-to-3 produces out-of-distribution results, or results that are not aligned with either the input image or the target pose, leading to seemingly larger variance. However, this "diversity" is largely due to misalignment and artifacts, which comes at the expense of fidelity.
In our case, our model continues to generate diverse results that are both plausible and better aligned with the conditions (e.g. the various chair backs and turtles in Figure 1 of the main paper).
**Future work.** Regarding your point about preserving diversity in multiview-diffusion, we agree that it is very important. In future work we plan to explore the effect of AMF in multiview-diffusion in depth and per your suggestion include a thorough analysis of diversity vs. fidelity. In particular, MVDream is conditioned on text rather than an image, making the desired solution space inherently more diverse which should be accounted for in such an investigation. That being said, in the preliminary results of MVDream (and also in the ControlNet models) that we ran for the rebuttal, we observed that the diversity is well-preserved. For example, for the prompt “A man eating a pizza, highly detailed”, our method produced results that vary significantly in terms of viewpoint and appearance. This trend persisted across various prompts and seeds which we could not include in the rebuttal due to space limits.
We thank the reviewer again for engaging with us, and are happy to address any further concern. | Summary: This paper proposes attention map filtering to enhance the novel view synthesis performance of Zero-1-to-3. The method is composed of several changes to the original sampling method, but the main contribution is an aggregation of attention map strategy by resampling the same denoising step multiple times. The paper draws an analogy to SGD to argue that resampling resembles the batch training in SGD, enhances the stability of the sampling process, thus improving the performance. Controlled experiments show that the proposed technique is effective in improving the quality of the generation
Strengths: - The paper is well-written. Related works include sufficient references. The method is clear and well-structured. The figures are also well-made and clear. I had no trouble understanding the method and the results presented in the paper.
- The method is simple yet seems to be effective. The technique can be plug-and-play to apply to diffusion-based NVS methods such as Zero123.
- The analysis is great. The authors perform interesting analysis on attention maps of the denoising layers, the decreased diversity in generation, and limitations. I think these analysis helps a lot in building intuition behind the method and can be useful for future works to understand the nature of the problem.
- The experiments are controlled. Though the authors didn't compare against the various latest 3D generative models, they focus on evaluating the effectiveness of the proposed technique by running controlled studies. At a time when there are so many papers coming out every day, controlled and scientific experiments are valuable.
Weaknesses: - Following the analogy introduced by the authors between diffusion and SGD, it seems unavoidable that aggregating attention maps across multiple sampling steps will compromise the model on generation diversity. As we can imagine, optimization with a larger batch size is more stable. So, even though the authors have proposed techniques to mitigate the problem of lack of diversity, I believe it is a more fundamental limitation.
- Since the techniques proposed seem general to all diffusion models, what prevents one from applying the technique to all diffusion generation tasks? Given the vast amount of literature and problems being solved by diffusion models, the potential impact could be significant. Are there any specific constraints such that the method is only applicable to the problem of novel view synthesis?
- Does the method lead to lack of visual details in the generated views? I noticed that the eyes of the generated chickens are missing in the last row of figure 1. Is this a general problem of the proposed techniques?
- It's a little unfair to claim that the proposed method can maintain the same computational efficiency. The proposed techniques to accelerate the diffusion process is not exclusive to the proposed techniques and can be applied to the base model as well.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weakness section for questions.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper has discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for insightful feedback and comments.
**1. Effect of AMF on Generation Diversity.**
We agree that excessive use of Attention Filtering (R>>1) may reduce generation diversity, as analyzed in the appendix. We will incorporate these insights into the limitations section. The balance between diversity and fidelity is a complex topic that warrants further study. Our findings indicate that "diversity" in the base model often includes artifacts and deviations from the real-world distribution, as illustrated in Figure 1 of the main paper and Figure 1 (left) of the rebuttal Supplementary Material. Responsible use of Attention Map Filtering usually helps preserve diversity while preventing results from deviating from the real distribution.
**2. Generality and applicability to other diffusion models.**
Thank you for highlighting this important point. Although our work focuses on novel view synthesis, we agree that the AMF module is potentially broadly applicable. This was indicated in the future work section. Inspired by your comment and by the request of reviewer 3GQx, we have extended Attention Map Filtering to other models beyond the task of novel view synthesis. Remarkably, we achieved noticeable improvements out-of-the-box in all cases. Please refer to General Comment #2 for more details.
**3. Visual details in generated views.**
We have not observed a general trend of lacking visual details in the generated views. Figures 1 in both the main paper and the supplementary materials demonstrate examples with fine details, such as the Android's antennas and the squirrel's eyes. The loss of the chicken's eye in one example may be due to resampling issues. While our pipeline usually mitigates the oversmoothing effect of vanilla resampling, it is not infallible.
**4. Computational efficiency.**
We acknowledge the reviewer's point regarding computational efficiency. Our primary goals were to improve quality and consistency while keeping generation times competitive to ensure applicability. We will clarify our claims in the paper and are open to further adjustments based on the reviewer's suggestions. An analysis of the computational cost of each module is provided in the response to reviewer UpfM. | Summary: This paper experimentally analyzes which components are important and responsible for generation artifacts.
To address them, this paper proposes an attention map filtering process.
This process shares a similar idea with SGD, reducing the error of the generation process through repeated sampling.
The paper also proposes several other components to enhance the results, including identity view information injection and a specialized sampling schedule.
The whole pipeline is training-free, so it is very easy to apply to pre-trained diffusion models.
Strengths: This paper proposes a method to improve novel-view diffusion models without external models or training; it is very easy to use and can be applied to different novel-view diffusion models.
The motivation of this paper is clear, and the process and details of the investigation are well demonstrated.
Weaknesses: * The model has no training cost, but there is a lack of thorough analysis of the additional inference cost.
* Qualitative results are limited. Given that the quantitative results do not show much improvement, more qualitative comparisons would be better.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Ablation studies on Zero123 and Zero123-XL showed different trends: MSA alone even impaired the results, but MSA combined with AMF improved them. What do you think is the reason?
* From the quantitative results, the improvement brought by MSA is significantly greater than that of AMF. Are there any qualitative results on MSA and AMF?
* How does this method affect the speed of the original diffusion model? Does the choice of R affect the results?
* Figures 3 and 7 seem to share some of the same samples; perhaps more cases and perspectives could be shown?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This method is limited by the generative capabilities of the pre-trained model; if the results are too far off, it does not have the ability to correct them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their insightful feedback and comments.
**1. Inference cost analysis**
Our proposed modules add a computational overhead to the base model. In the paper, we addressed this by counting the overall number of function evaluations (NFE) and keeping it on par with the base model to ensure a fair comparison. As requested, we now discuss the individual computational overhead of each module. We will incorporate this analysis into the manuscript for completeness.
(1) Resampling: Similar to the total number of denoising steps, T, the number of resampling iterations, R, linearly increases the NFE. Our chosen value for R and the timesteps at which we apply it resulted in mapping 26 denoising steps to a total of 66 NFE.
(2) MSA: The main additional cost of MSA is the necessity to generate the input view in addition to the target views, meaning the effective number of generated samples is increased by one.
(3) AMF: We implement attention map filtering in the upsampling part of the UNet and apply it during the early steps of the denoising process. We maintain two additional instances of each attention map: the attention map from the previous timestep (for cross-step updates) and the refined map from the current timestep (for in-step updates).
We provide a table showing the running times (in seconds) of Zero-1-to-3 and Zero-to-Hero for the same NFE (66). Overall, we manage to provide competitive running times. If both MSA and AMF are active (requiring the generation of the input), the running time is increased by approximately 1-1.5 seconds. If only AMF is active, the overhead is much smaller, averaging around 0.5 seconds.
|Samples Num|Zero123|Ours|
|-|-|-|
|1|2.2|N/A|
|2|2.9|3.1|
|3|3.5|3.9|
|4|4.3|4.9|
**2. Additional qualitative results.**
Please refer to Figure 1 in the supplementary material for additional qualitative comparisons of view synthesis by our Zero-to-Hero and the baseline Zero-123-XL. As can be seen, the improvement demonstrated by our method is consistent across objects and seeds. Further qualitative results demonstrating our method on other base models are discussed in General Comment #1.
**3. The extent of quantitative improvement.**
We have provided a clarification in General Comment #4.
**4. Qualitative demonstration of MSA and AMF.**
We appreciate this suggestion. Indeed, the difference in quantitative improvement between these two modules does not faithfully reflect their effect. We thus chose to share the response with all reviewers in General Comment #1 and Figure 1 (right) in the supplementary material.
**5. The effect of MSA and AMF on zero123 and zero123-XL.**
This is a good observation. Firstly, regarding the different effects of MSA and AMF, please refer to General Comment #3. Here, we address the reviewer's inquiry regarding the *difference in the modules' effects on both base models*. Our MSA (MSA with early termination) generally improves performance in both base models. As pointed out by the reviewer, in Zero-1-to-3, MSA did not improve PSNR and IoU while boosting SSIM and LPIPS. In Zero123-XL, the improvement was consistent across all metrics. A potential explanation for this difference may lie in the inherent bias of MSA towards the input view. Roughly speaking, MSA copies source details to improve generation consistency. Thus, too much of it might impair the results as it would lead to a bias towards the appearance and pose of the input image. To control this effect, we introduced early stopping. The termination step in our work was chosen based on Zero123-XL (where MSA consistently improved all metrics). While we found that the same parameters generalized well for the overall performance of Zero-1-to-3 without further tuning (the combined MSA and AMF improved all metrics and addressed the same issues as Zero123-XL in visual results), the ablation reveals that the parameters may not be optimal for the MSA effect alone in Zero-1-to-3.
**6. Choice of resampling steps R.**
We found that the model is not very sensitive to the choice of R, and values within the range of 4-8 provide similar results. While some objects benefited from larger values (`~10`), the overall improvement was minimal. Additionally, we found (as shown in Figure 8 in the appendix of the paper) that large values of R (`~15-20`) can limit diversity.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. The rebuttal addressed most of my concerns, and I am willing to keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their thoughtful comments and for taking the time to review our rebuttal and engage with us. Your feedback is invaluable for us.
We are pleased we could address your concerns, and are happy to address any further concerns. | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers and ACs for their efforts in reviewing our work.
We are encouraged by the recognition of our work's innovative perspective of attention map filtering as analogous to optimization process (hzMx, 3GQx), applicability and effectiveness (UpfM, qQr5). We are pleased that the thoroughness of our analyses was appreciated (UpfM, hzMx, qQr5). Below, we address the concerns raised by the reviewers.
**1. Additional results**
We provide additional qualitative and quantitative results in the Supp. These results reinforce the effectiveness of Zero-to-Hero and demonstrate its generalization to additional datasets and models. (1) Figure 1 (left): additional view synthesis results of Zero-to-Hero; Figure 1 (right): results that exemplify the individual contributions of the proposed MSA and AMF. (2) Figure 2: our AMF applied to pose- and segmentation-conditioned generation pre-trained with ControlNet. (3) Figure 3: our AMF applied to the multiview synthesis model MVDream. (4) Evaluation on RTMV, a dataset of 3D scenes. Please refer to our response to reviewer hzMx for further details.
**2. Zero-to-Hero generalization beyond NVS**
Although our work addresses the core limitations of single-view synthesis models, the condition-enforcing effect of our proposed modules is more general. As mentioned in the conclusions, we intend to leave the in-depth exploration of other applications to future work. However, as the generality of our method drew much interest from the reviewers, we have conducted several preliminary experiments which demonstrate promising results. Remarkably, the integration of our proposed method into other base models was straightforward, demonstrating its applicability and simplicity.
(1) Pose- and segmentation-conditioned image generation. A brief study of ControlNet models demonstrated that they suffer from similar limitations as Zero123 and its follow-ups, namely a lack of condition enforcement and frequent visual artifacts. We implemented our proposed AMF module for two pre-trained ControlNet models and found that it robustly mitigates artifacts across various prompts and seeds. Please note that MSA is not immediately applicable to these models and thus was not used.
(2) Multi-view synthesis. We integrated AMF into MVDream, a text-to-multiview model, and found that it helps to mitigate the same issues as in the single view case. Similarly, MSA was not implemented.
**3. The Contribution of MSA vs. AMF**
Reviewers UpfM and hzMx, while the image quality metrics show a larger improvement with MSA, these numbers do not tell the whole story. AMF and MSA address different artifacts in the generation process and complement each other. The fact that MSA contributes more to the metrics does not take away from AMF's individual contribution. This complementary effect is demonstrated by the consistent improvement observed when both methods are used in tandem compared to using each individually.
- **MSA** (Mutual Self-Attention) transfers information from the input view to the target, assuming similar appearance and textures. This method is particularly effective when the input and target views are relatively close. As an example, Zero123 sometimes generates regions with plain black or random textures in the target views which MSA mitigates well, as shown in the first two rows of Figure 1 (right) in the Supp. We note that image quality metrics are more sensitive to this improvement and thus show more significant gains than when improvement in shape is achieved. However, MSA alone is usually insufficient for refining the pose or structure of the target. Its inherent bias towards the input shape and appearance may harm the results, especially when the change of viewpoint is significant. Our analysis therefore led us to utilizing MSA only during the early steps of the denoising process, named Early-Stage Shape Guidance in our paper.
- **AMF** (Attention Map Filtering), on the other hand, excels when the change in viewpoint is larger. While it may not always improve color and textures, AMF leads the model to produce more probable results that align better with the real distribution. We observed that most of the structure refinement is done by AMF. Unfortunately, none of the metrics measure plausibility, so this effect is not faithfully reflected by the evaluation metrics.
We have included a new figure demonstrating where MSA and AMF excel. The first two rows illustrate why MSA shows a larger improvement in image quality metrics, although it usually cannot fully resolve significant structural issues. Rows 3 and 4 show the structural improvement achieved with AMF. Finally, the last row shows a case where neither technique worked well enough on its own, but the combination did.
As further testimony to the significant effect of AMF, we refer the reviewers to general comment #2, where AMF is used to boost other generative models such as ControlNet and MVDream. These examples exhibit similar issues to Zero123 and demonstrate the role of AMF in mitigating them.
**4. The Extent of Quantitative Improvement**
Reviewers UpfM and hzMx raised a concern regarding the extent of improvement in the quantitative results reported in Table 1 of the main paper. While the absolute improvement may not seem large, it is important to put it in context to appreciate its significance. Zero123-XL improved upon Zero123 with the same base model by training on 10x more data, achieving gains of [0.45, 0.003, -0.01, 2.9%] in PSNR, SSIM, LPIPS, and IoU, respectively. Our method achieved comparable gains [0.37, 0.008, -0.01, 1.7%] with no additional data and no further training. Remarkably, our method demonstrates similar and slightly larger gains when applied to Zero123-XL [0.63, 0.1, -0.01, 1.9%], a boost in performance that could not be achieved with merely more data. Also, the ratio of improvement is larger compared to other training-free methods (e.g., ViVid123).
Pdf: /pdf/aff88a92929a05275ee92bf13e519c2c8a6573c0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SA3DIP: Segment Any 3D Instance with Potential 3D Priors | Accept (spotlight) | Summary: The paper "SA3DIP: Segment Any 3D Instance with Potential 3D Priors" presents a novel method for 3D instance segmentation by incorporating geometric and textural priors to generate 3D primitives. It addresses the limitations of existing methods that rely heavily on 2D foundation models, leading to under-segmentation and over-segmentation issues. The proposed approach includes a 3D detector for refining the segmentation process and introduces a revised version of the ScanNetV2 dataset, ScanNetV2-INS, with enhanced ground truth annotations. Experimental results on various datasets demonstrate the effectiveness and robustness of the SA3DIP method.
Strengths: The strength of the paper lies in its innovative approach to integrating both geometric and textural priors for 3D instance segmentation, which significantly reduces initial errors and enhances the overall segmentation quality. The introduction of a 3D detector for refining segmentation further strengthens the method by addressing over-segmentation issues. Additionally, the revised ScanNetV2-INS dataset provides a more accurate benchmark for evaluating 3D segmentation models, contributing valuable data to the research community. The experimental results across multiple challenging datasets convincingly demonstrate the robustness and effectiveness of the proposed method.
Weaknesses: Despite its strengths, the paper has certain weaknesses. Firstly, it lacks sufficient innovation, as the framework closely resembles SAI3D [1], with the primary difference being the addition of a 3D detector. The entire Scene Graph Construction part is exactly the same as SAI3D. Furthermore, as shown in the ablation study, the overall performance improvement of the model heavily depends on the pre-trained 3D detector, which diminishes the originality and contribution of this paper.
[1] Yin, Yingda, et al. "Sai3d: Segment any instance in 3d scenes." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Section 3.1, the author mentions 'histogram vectors,' but the definition of these vectors is unclear in the article. It is not specified how these vectors are derived or what their dimensions are. Additionally, it is unclear why they can effectively represent the features of each superpoint for calculating the affinity score. Clarification on these points is necessary to understand their role and significance in the methodology.
2. In Table 1, SAMPro3D is the method that is numerically closest to the proposed work. Why is there no visual comparison in Figure 4?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations of their work. They highlight that using only 3D priors results in short execution times but can lead to an excessive number of superpoints, complicating the merging process. Furthermore, they acknowledge that the affinity matrix based on 2D masking relies heavily on the accuracy of 2D foundational segmenters and suggest that a more robust merging algorithm or better utilization of various 2D foundational models could be promising future directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and detailed feedback.
**W: Innovation of the proposed approach**\
In Fig. 1 of our paper, we demonstrate the existing problem of previous methods, which use under-segmented 3D primitives and propagate the over-segmented 2D masks into the final 3D instance segmentation. We believe our method is problem-oriented, aiming to **alleviate these two major defects.**\
We therefore design two modules accordingly:
1) Complementary primitives generation module to generate more accurate and finer-grained 3D primitives to avoid any error accumulation.
2) Introducing the 3D space prior for providing instance-aware constraint, which was implemented by a 3D detector.
The combination of both modules is effective at solving the aforementioned problems.
**W: Overall performance gain analysis**\
It is true that from the ablation study the performance gain seems to depend heavily on the 3D detector. However, we believe that the performance gain of the complementary primitives module **is not as minor as it looks**. We think this is related to the definition of the metric AP (ratio of correctly identified instances to the total number of identified instances), **which favors under-segmentation over over-segmentation, since the former introduces relatively fewer false instances.** We randomly choose two scenes (scene0011_00 & scene0644_00) and conduct random 10%, 20%, 30% over/under-segmentation tests based on their GT instances. Results are averaged over three experiments and shown in the table. APs with subscript **O** stand for over-segmentation results, and **U** for under-segmentation.
| | mAP_O | AP25_O | AP50_O | mAP_U | AP25_U | AP50_U |
|---------|--------|--------|--------|--------|--------|--------|
| **Scene0011_00**| | | | | | |
| 10% | 87.2 | 87.2 | 98.1 | 92.4 | 96.2 | 96.2 |
| 20% | 71.8 | 71.8 | 98.1 | 84.5 | 92.3 | 92.3 |
| 30% | 62.3 | 62.3 | 98.1 | 85.4 | 92.3 | 92.3 |
| **Scene0644_00**| | | | | | |
| 10% | 88.1 | 88.1 | 96.7 | 97.6 | 98.8 | 98.8 |
| 20% | 73.6 | 73.6 | 98.3 | 86.9 | 92.9 | 92.9 |
| 30% | 60.3 | 60.3 | 98.3 | 69.8 | 82.1 | 82.1 |
It can be observed that at every perturbation percentage, under-segmentation gives higher APs, due to **its high precision and fewer false positives.** Over-segmentation, in contrast, gives **lower precision and higher recall.** This is consistent with our results, which give finer-grained primitives (20% more on average, shown in the table below) but slightly lower APs. We show the number of instances in comparison with SAI3D in the table.
| Count | Primitive Min | Primitive Max | Primitive Avg | Final Min | Final Max | Final Avg |
|---------|--------|--------|--------|--------|--------|--------|
| SAI3D | 159 | 3905 | 1068 | 6 | 258 | 59 |
| ours | 243 | 3989 | 1272 | 5 | 159 | 45 |
Upon closer look at the ablation study in our paper, adding only our complementary 3D superpoint primitives module even slightly deters performance, due to the AP metric. However, our final results after running the whole pipeline (i.e., after adding the 3D space prior) **reverse the minor drop and produce an extra gain of around 1% compared to adding only 3D detection.** The instance counts decrease as well. The superior performance and decreased instance number indicate that our results achieve both high recall and high precision. This proves that combining finer-grained (and slightly over-segmented) primitives with instance-aware refinement (merging those over-segmented primitives) provides a thorough solution.
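To make the under- vs over-segmentation asymmetry concrete, here is a small toy sketch (our own illustration using greedy IoU matching; it is not the benchmark evaluation code, and the matching rule is a simplified stand-in for the full AP protocol):

```python
# Toy illustration: instances represented as sets of point ids.

def iou(a, b):
    return len(a & b) / len(a | b)

def precision_recall(gt, preds, thresh=0.5):
    """Greedy one-to-one matching of predictions to GT instances at an IoU threshold."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, 0.0
        for i, g in enumerate(gt):
            if i not in matched and iou(p, g) > best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None and best_iou >= thresh:
            matched.add(best)
            tp += 1
    return tp / len(preds), tp / len(gt)

gt = [set(range(10)), set(range(10, 20))]
over = [set(range(5)), set(range(5, 10)), set(range(10, 20))]  # one GT instance split in two
under = [set(range(20))]                                       # two GT instances merged

print(precision_recall(gt, over))   # ≈ (0.67, 1.0): the split creates a false positive
print(precision_recall(gt, under))  # (1.0, 0.5): no false positives, but lower recall
```

The toy numbers show the same pattern as our experiments: under-segmentation keeps precision (and hence AP) high by producing few false instances, while over-segmentation trades precision for recall.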
**Q1: Histogram vectors clarification**\
The histogram vector represents **a distribution of 2D mask ids** covered by the projection of a 3D primitive. It reflects which 2D masks correspond to the current 3D primitive. The histogram vectors are then used to calculate the affinity (similarity) matrix for each pair of 3D primitives, **indicating the likelihood that two primitives belong to the same object (the same 2D mask).**\
In the implementation, assume that for a scene there are M 2D images and N 3D points. For the m-th image, we project the N points onto the image using the corresponding pose and camera intrinsics. We record the 2D mask label for every 3D point according to the pixel it is projected onto (0 for points invisible in the image). After this we obtain an (N, M) matrix indicating the 2D mask label for every point in every 2D view. \
Next, we load all K 3D primitives in the scene. Again, for the m-th image, we record the 2D mask labels each primitive covers according to the (N, M) matrix obtained earlier, since multiple labels may be covered by a single primitive due to ambiguity or inaccuracy in the 2D masks. We maintain a normalized matrix of size (K, V) for each view, where V is the number of 2D masks in the view. **This (K, V) matrix holds the histogram vectors for all primitives in the m-th view.**
| | 2D mask #1 | 2D mask #2 | ...... | 2D mask #V |
|---------|--------|--------|--------|--------|
| **Primitive #1**| 0 | 0.5 | ...... | 0 |
| **Primitive #2**| 0.3 | 0.6 | ...... | 0 |
| **......** | ...... | ...... | ...... | ...... |
| **Primitive #K**| 0.1 | 0 | ...... | 0.6 |
In the end, we use the (K, V) matrix for cosine similarity calculation to obtain the affinity matrix of size (K, K) which represents the likelihood of each pair of primitives.
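A minimal NumPy sketch of this computation (illustrative only; the array names and function signatures are our own assumptions, not the released code):

```python
import numpy as np

def histogram_vectors(point_mask_labels, point_to_primitive, K, V):
    """point_mask_labels: (N,) 2D-mask id of each 3D point in one view (0 = invisible).
    point_to_primitive: (N,) primitive id (0..K-1) of each 3D point.
    Returns a row-normalized (K, V) matrix: row k is the distribution of
    2D-mask ids covered by the projection of primitive k."""
    hist = np.zeros((K, V))
    visible = point_mask_labels > 0
    # Unbuffered accumulation: count how often primitive k covers mask v.
    np.add.at(hist, (point_to_primitive[visible], point_mask_labels[visible] - 1), 1)
    sums = hist.sum(axis=1, keepdims=True)
    return np.divide(hist, sums, out=np.zeros_like(hist), where=sums > 0)

def affinity(hist):
    """Cosine similarity between histogram vectors -> (K, K) affinity matrix."""
    norms = np.linalg.norm(hist, axis=1, keepdims=True)
    unit = np.divide(hist, norms, out=np.zeros_like(hist), where=norms > 0)
    return unit @ unit.T

# Tiny example: 6 points, K=2 primitives, V=2 masks in this view.
labels = np.array([1, 1, 2, 0, 2, 2])  # mask id per point (0 = invisible)
prims = np.array([0, 0, 0, 1, 1, 1])   # primitive id per point
H = histogram_vectors(labels, prims, K=2, V=2)  # [[2/3, 1/3], [0, 1]]
A = affinity(H)                                 # off-diagonal = 1/sqrt(5)
```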
**Q2: Visual comparison of SAMPro3D**\
We showcase the comparison in **Fig. 3 of the global rebuttal PDF.** Our method clearly alleviates the over-segmented instances that appear in SAMPro3D, which is consistent with our initial purpose of introducing the 3D instance-aware space prior.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer cYA9,
Thank you very much for reviewing our paper and giving us some good questions. We have tried our best to answer all the questions according to the comments. Especially, we conduct more ablation experiments on 3D priors and visual comparison with SAMPro3D.
As half of the discussion time has already passed, we would be very grateful if you could respond to our rebuttal and offer us an opportunity to further engage with your concerns and address any additional questions you might have!
Thank you for your time and feedback!
Best,
Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for the clear explanation of histogram vectors and the visual comparison of SAMPro3D. Your response effectively answered my questions. | Summary: The paper proposes a pipeline to perform open-vocabulary 3D instance segmentation of scenes, incorporating geometric and RGB information. The method is based on constructing a super-points graph, which is then refined by SAM and a 3D detector (V-DETR). Also, the paper provides an enhanced version of ScanNetV2, correcting and extending the annotations. The method outperforms other SAM based baselines.
Strengths: Given the popularity of ScanNetV2, releasing a more curated and fine-grained annotation seems a useful contribution. The proposed method is a careful combination of components, in a cascade of steps that seems effective.
Weaknesses: - As a non-expert in 3D scene segmentations, I find it difficult to understand the novelty of the proposed approach, whereas other works also rely on SAM. From my understanding, the key difference seems to be the exploitation of 3D prior to incorporating the 3D classifier, producing the main impact in instance awareness by constraint. If this is the case, I am unsure about the significance of the technical contribution, and I would ask for further clarification on this.
- The paper does not provide enough documentation or analysis of the new annotations for ScanNetV2, which is critical since it is one of the core contributions of the paper. Figure 3 is insufficient to understand the quantitative statistics of the performed effort. On the new dataset, the methods perform worse, which could suggest that the new labels are more difficult/detailed, but more evidence is required to confirm this. To prove the dataset's usefulness, I would suggest comparing methods trained on ScanNetV2 and ScanNetV2-INS, and incorporating more statistics into the paper (e.g., the difference in the number of categories, how many instances for each of these, ...).
Technical Quality: 2
Clarity: 2
Questions for Authors: Adding on the observations reported in the previous section:
1) The method requires posed images. Is this a requirement also for the competitors?
2) Images are often difficult to parse: e.g., Figure 2 contains many small images with several colors, and its flow is not linear. I would suggest providing a more schematic overview with larger figures and including in the appendix this detailed version.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The method briefly discusses the proposed method's main limitation but not the dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and detailed feedback.
**W1: Novelty of the proposed approach**\
It is true that other methods, including ours, rely on SAM, and **this heavy reliance on 2D segmentation results is exactly what we were trying to avoid by introducing the 3D prior.** In Fig. 1 of our paper, we demonstrate the existing problem of previous methods (like SAI3D and SAMPro3D), which inherit the over-segmented 2D masks into the final 3D instance segmentation. \
Thus, our method is problem-oriented, trying to **alleviate two major defects** existing in the pipelines of the current works through 3D priors.
1) The under-segmented 3D primitives and subsequent error accumulation
2) The part-level over-segmentation tendency of the 2D foundation segmenter.
We therefore design two modules accordingly. First, we use the complementary primitives generation module to generate more accurate and finer-grained 3D primitives to avoid error accumulation. Second, we introduce the 3D space prior to provide an instance-aware constraint, which we implement using a 3D detector. The combination of both modules is effective at solving the aforementioned problems.
**W2/Limitation: Documentation or analysis of the new annotations for ScanNetV2**\
We should clarify that all methods of this type (such as SAMPro3D, SAI3D, and ours) are zero-shot: they utilize the generalization capability of 2D foundation models and do not depend on training. Thus, the ScanNetV2-INS dataset we propose is intended purely for evaluation, since these methods' output instance segmentation results do not change when the ground-truth file is switched. We elaborate the details of the ScanNetV2-INS dataset as follows:
1) **Meaning**: There are two major deficiencies in the original ScanNetV2 dataset: missing instances and incomplete instance masks. To solve these, we assigned new class-agnostic labels to the missing ones and re-labelled the incomplete instance masks.
2) **Tasks**: Following the recent methods that utilize 2D foundation models to perform zero-shot 3D class-agnostic instance segmentation, our ScanNetV2-INS dataset also focuses on this specific task and is for evaluation only.
3) **Format**: We generate a txt file (e.g., sceneid.txt) for each scene in the original ScanNetV2 validation set. The txt file contains N separate lines corresponding to N points in the scene, where the number of each line is calculated as: Number = instance id (refers to n-th object in the scene) + 1000 * semantic label id (refers to scannetv2-labels.combined.tsv). The newly labelled instances were assigned a semantic label id of 41 since we focus on class-agnostic segmentation.
4) **Limitation**: Our dataset focuses on the evaluation of 3D class-agnostic instance segmentation, given the recent trend of utilizing 2D foundation segmenters to perform zero-shot 3D class-agnostic instance segmentation and the high expense of annotating 3D data.
5) **More statistics**: In Fig. 3 of our paper, we show how many scenes contain more than (10, 20, ..., 100) instances in ScanNetV2 and ScanNetV2-INS. Our proposed dataset clearly provides a larger number of scenes with more instances. We give more statistics below. The first table illustrates the instance counts of the original ScanNetV2 and ScanNetV2-INS; it can be seen that our ScanNetV2-INS dataset incorporates more instances. The second table shows the number of instances whose point counts fall within specified ranges for the two datasets. ScanNetV2-INS features more small objects, which requires the model to have finer-grained instance perception capabilities. The statistics from these two tables could, to some extent, explain why the ScanNetV2-INS dataset is more challenging than ScanNetV2.
| Instance Count | Min | Max | Avg | Total |
|---------|--------|--------|--------| ------ |
| ScanNetV2| 2 | 47 | 14| 4364 |
| ScanNetV2-INS| 2 | 54 | 17 | 5596 |
| Point # Per Instance | <500 | 500-1000 | 1000-2000 | 2000-5000 | 5000-10000 | >10000 |
|---------|--------|--------|--------|--------|--------|--------|
| ScanNetV2| 252 | 452 | 1119|1690|567|284|
| ScanNetV2-INS| 692 | 748 | 1366 |1873|626|291|
**Q1: Requirement of the posed images**\
Yes, posed images are also required by the other competitors. As we clarified in W2/Limitation above, methods of this type (such as SAMPro3D, SAI3D, and ours) are zero-shot: they **utilize the generalization capability of 2D foundation models**, since to date there is no 3D foundation model, owing to the limited amount of labelled 3D data. The correspondence between 2D and 3D space is the focus of these methods; thus, posed images are essential as a bridge between 2D foundation models and 3D space. We will include this discussion in the final version of our paper.
**Q2: Further figure clarification**\
In general, the whole pipeline shown in Fig. 2 of our paper can be separated in three parts. Step A: 3D primitives generation exploiting both geometric and textural priors. Step B: scene graph construction where the primitives serve as nodes and affinity matrix of 3D primitives guided by 2D masks generated using 2D segmentators as edge weights. Step C: Region growing and instance-aware refinement on the constructed scene graph. **We provide a more straightforward version with larger images in Fig. 1 of global rebuttal PDF document.**
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal
Comment: I thank the author for clarifying their contributions and method. I think showing a significant increment of instances with few points is a reasonable measure to suggest that the new dataset has, at least to some extent, a further complexity.
I find this sentence a bit confusing: *Thus, the ScanNetV2-INS dataset we propose is simply of evaluation use since their output instance segmentation results do not change when switching the ground truth file.*
From my understanding, ScanNetV2-INS provides more fine-grained annotations to evaluate the methods. These can also be used to train (or fine-tune) the models. Hence, I am unsure why the authors suggest that this dataset is designed for evaluation purposes only.
Thanks again to the authors for their time and clarifications.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 3tui,
Thanks so much for your quick response and feedback, and we are really happy to hear that we were able to address some of your concerns!
For the sentence you mentioned: the original ScanNetV2 dataset actually **contains a large number of scenes (1202 in the train split, 312 in the validation split, and 100 in the test split).** Recent methods, such as SAM3D, SAMPro3D, SAI3D, and ours, follow a **zero-shot** pipeline that lifts the 2D segmentation results from SAM to the 3D scenes. These methods therefore do not require a training process or the scenes in the train split, and can be (and in practice are) run **directly on the validation set of the 3D dataset** to compare 3D instance segmentation metrics.
Thus, the ScanNetV2-INS dataset we propose consists only of the revised versions of the 312 validation scenes. It is certainly also possible to use our dataset for a model that requires training, by replacing the original val set with ours, **since the format of our ground-truth labels is consistent with the original.** But the initial motivation for proposing the dataset was to provide a fairer comparison between the aforementioned training-free methods (thus on the validation set only). This is why we describe the new dataset as being for evaluation use, in the context of these training-free methods.
Once again, thank you for your quick response and additional feedback! In the remaining discussion period, we would be very glad to address any additional questions that may arise, or any clarifications needed.
Authors | Summary: This study introduces SA3DIP, a novel 3D instance segmentation model based on SAM. SA3DIP leverages texture priors from point cloud color channels to generate complementary primitives and incorporates 3D spatial priors when merging 2D masks by integrating a 3D detector. These enhancements enable SA3DIP to generate superior superpoints and mitigate the over-segmentation problem found in previous SAM-based 3D instance segmentation methods.
Strengths: (1) Using SAM to extract 2D masks from RGB-D frames and merge them into a final 3D segmentation result is common in 3D OV segmentation methods. However, few previous methods are geometry-aware during the merging process, highlighting the significance of SA3DIP's incorporation of 3D priors.
(2) Constraints from 3D spatial priors substantially improve performance on both the ScanNetV2 and ScanNetV2-INS datasets.
(3) The low quality of ScanNetV2 ground-truth segmentation results has been a persistent problem. A 3D segmentation dataset with more accurate ground truth, like ScanNetV2-INS, is in demand.
Weaknesses: (1) Using only RGB values as a texture prior is not robust enough, due to their susceptibility to variations caused by lighting conditions, shadows, reflections, and object materials.
(2) The ablation study shows that the performance gain from Complementary 3D superpoint primitives is not significant compared to other modules in SA3DIP.
(3) This paper reports only the class-agnostic instance segmentation results of SA3DIP. However, the previous benchmark method (SAI3D [1]) also uses semantic instance segmentation as a typical evaluation metric.
[1] Yin, Y., Liu, Y., Xiao, Y., Cohen-Or, D., Huang, J. and Chen, B., 2024. Sai3d: Segment any instance in 3d scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3292-3302).
Technical Quality: 3
Clarity: 3
Questions for Authors: More comprehensive experiments are needed to validate the effectiveness of the Complementary Primitives Generation module. Additionally, the authors should further discuss the motivations for using color value similarities as texture priors.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations in section 4.4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and detailed feedback.
**W1/Q: RGB values as texture prior are not robust enough / Motivations for using color values / More experiments**\
In the first place, our motivation is that we found **distinct instances with similar normals often exhibit different colors.** Meanwhile, the commonly used 3D indoor datasets have only xyz and rgb attributes for their 3D points. Thus, we opt to explore RGB values as a texture prior.\
It is true, as noted also by other reviewers, that a texture prior such as RGB values is not robust enough when used alone. We have conducted further experiments on the priors and their weights, shown in **Tab. 1 of the global rebuttal PDF document.** In our approach we assign less weight to the textural prior, thus exploiting it while minimizing its negative impact. For ScanNetV2, we set the geometric weight ($W_n$) to 0.96 and the texture weight ($W_c$) to 0.04 (Line 214). Under this setting, the complementary primitives generation module yields a good initial state for the subsequent merging and refinement. We have conducted more experiments on the Matterport3D (table below) and Replica datasets **(Tab. 2 of the global rebuttal PDF document).**
| Matterport3D | $W_n$ | $W_c$ | 3D Space Prior | mAP | AP25 | AP50 |
|---------|--------|--------|--------|--------|--------|--------|
| OpenMask3D| /| / | / | 15.3 | 28.3 | 43.3 |
| OVIR-3D | / | / | / | 6.6 | 15.6 | 28.3 |
| SAM3D | 1 | 0 | / | 10.1 | 19.4 | 36.1 |
| SAI3D | 1 | 0 | / | 18.9 | 35.6 | 56.5 |
| **Ablation of ours** | | | | | | |
| Ours #1| 1 | 0 | yes | 19.8 | 36.6 | 56.2 |
| Ours #2 | 0.9 | 0.1 | / | 18.1 | 35.7 | **62.3** |
| Ours #3 | 0.9 | 0.1 | yes | **20.6** | **38.3** | 61.0 |
It can be seen in the last row of **three tables** that, our performances on all three datasets exceed that using only geometry prior. We will include this discussion in the final version of our paper.
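To make the weighting concrete, a minimal sketch of how the two priors might be blended is shown below (the similarity functions are our illustrative assumptions; only the weights $W_n=0.96$ and $W_c=0.04$ come from the rebuttal):

```python
import numpy as np

def blended_similarity(normal_a, normal_b, rgb_a, rgb_b, w_n=0.96, w_c=0.04):
    """Weighted combination of a geometric (normal) and a textural (RGB) similarity.
    Normals are assumed unit-length; RGB values are assumed in [0, 1]."""
    geo = float(np.dot(normal_a, normal_b))  # cosine of the angle between normals
    # 1.0 for identical colors, 0.0 at the maximum possible RGB distance sqrt(3).
    tex = 1.0 - np.linalg.norm(np.subtract(rgb_a, rgb_b)) / np.sqrt(3.0)
    return w_n * geo + w_c * tex
```

With these weights, two neighboring points with identical normals but maximally different colors still score 0.96, so the texture term mainly separates geometrically similar regions rather than dominating the decision.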
**W2: Non-significant gain of complementary 3D superpoint primitives module**\
In our approach, we designed the complementary 3D superpoint primitives module to alleviate the existing problem of under-segmented 3D primitives based only on geometry, and the subsequent error accumulation. \
For the non-significant gain of this module, it is related to the definition of the metric AP (ratio of correctly identified instances to the total number of identified instances). **This AP metric favors under-segmentation over over-segmentation, since the former introduces relatively fewer false instances.** To further illustrate this, we randomly choose two scenes (scene0011_00 & scene0644_00) and conduct random 10%, 20%, 30% over/under-segmentation tests based on their GT instances. Results are averaged over three experiments and shown below. APs with subscript **O** stand for over-segmentation results, and **U** for under-segmentation.
| | mAP_O | AP25_O | AP50_O | mAP_U | AP25_U | AP50_U |
|---------|--------|--------|--------|--------|--------|--------|
| **Scene0011_00**| | | | | | |
| 10% | 87.2 | 87.2 | 98.1 | 92.4 | 96.2 | 96.2 |
| 20% | 71.8 | 71.8 | 98.1 | 84.5 | 92.3 | 92.3 |
| 30% | 62.3 | 62.3 | 98.1 | 85.4 | 92.3 | 92.3 |
| **Scene0644_00**| | | | | | |
| 10% | 88.1 | 88.1 | 96.7 | 97.6 | 98.8 | 98.8 |
| 20% | 73.6 | 73.6 | 98.3 | 86.9 | 92.9 | 92.9 |
| 30% | 60.3 | 60.3 | 98.3 | 69.8 | 82.1 | 82.1 |
It can be observed that at every perturbation percentage, under-segmentation gives higher APs, due to **its high precision and fewer false positives.** Over-segmentation, in contrast, gives **lower precision and higher recall.** This is consistent with our results, which give finer-grained primitives (20% more on average, shown in the table below) when exploiting both geometry and texture, but slightly lower APs. We show the number of instances in comparison with SAI3D in the table.
| Count | Primitive Min | Primitive Max | Primitive Avg | Final Min | Final Max | Final Avg |
|---------|--------|--------|--------|--------|--------|--------|
| SAI3D | 159 | 3905 | 1068 | 6 | 258 | 59 |
| ours | 243 | 3989 | 1272 | 5 | 159 | 45 |
However, our final results after running the whole pipeline (i.e., after adding the 3D space prior) **reverse the minor drop and produce an extra gain of around 1% compared to adding only 3D detection.** The instance counts decrease as well. The superior performance and decreased instance number indicate that our results achieve both high recall and high precision. This proves that combining finer-grained (and slightly over-segmented) primitives with instance-aware refinement during the merging process (merging those over-segmented primitives) provides a thorough solution.
**W3: Semantic instance segmentation**\
We have conducted semantic instance segmentation on the ScanNet200 dataset, better aligning our evaluation with previous benchmark methods (such as SAI3D). The results are provided in the following table. Our method clearly achieves better performance than other methods on AP, Head, and Tail.
| Method | AP | AP50 | AP25 | Head(AP) | Common(AP) | Tail(AP) |
|---------|--------|--------|--------|--------|--------|--------|
|**Closed mask**|||||||
| OpenMask3D| 15.4 | 19.9 | 23.1 |17.1|14.1|14.9|
|**Open-vocab. mask**|||||||
| OVIR-3D| 9.3 | 18.7 | **25.0** |9.8|9.4|8.5|
| SAM3D| 9.8 | 15.2 | 20.7 |9.2|8.3|12.3|
| SAI3D| 12.7 | 18.8 | 24.1 |12.1|10.4|16.2|
| Ours| **13.5** | **20.4** |*24.8*|**14.9**|**11.6**|**16.9**|
---
Rebuttal Comment 1.1:
Comment: Dear reviewer sUBY,
Thank you very much for reviewing our paper and giving us some good questions. We have tried our best to answer all the questions according to the comments.
As half of the discussion time has already passed, we would be very grateful if you could respond to our rebuttal and offer us an opportunity to further engage with your concerns and address any additional questions you might have!
Thank you for your time and feedback!
Best,
Authors | Summary: The paper introduces SA3DIP, a novel method for 3D instance segmentation that leverages both geometric and textural priors to enhance the accuracy of segmentation tasks. The goal is to improve open-world 3D instance segmentation by addressing the limitations of current methods, which often result in under-segmentation and over-segmentation due to the limited use of 3D priors.
SA3DIP integrates both geometric and textural priors to generate finer-grained 3D primitives, reducing initial errors that accumulate in subsequent processes. It incorporates constraints from a 3D detector during the merging process to rectify over-segmented instances, maintaining the integrity of objects in 3D space. It also introduces a revised version of the ScanNetV2 dataset, termed ScanNetV2-INS, with enhanced ground truth labels for more accurate and fair evaluations of 3D class-agnostic instance segmentation methods.
Finally, extensive experiments on ScanNetV2, ScanNetV2-INS, and ScanNet++ datasets demonstrate the effectiveness and robustness of SA3DIP, achieving significant improvements in segmentation accuracy over existing methods.
Strengths: Enhanced 3D Instance Segmentation Pipeline:
- The SA3DIP pipeline incorporates both geometric and color priors to generate complementary 3D primitives.
- Introduces a 3D detector to provide additional constraints during the merging process, addressing over-segmentation issues.
ScanNetV2-INS Dataset:
- A revised version of the ScanNetV2 dataset with improved annotations, providing a more accurate benchmark for evaluating 3D instance segmentation methods.
- Rectifies incomplete annotations and incorporates additional instances to better reflect real-world scenarios.
Robust Performance:
- Demonstrated superior performance in 3D instance segmentation through extensive experiments on multiple datasets.
- Achieved competitive results, significantly outperforming existing methods in terms of mAP (mean Average Precision), AP50, and AP25 scores.
The SA3DIP method addresses the limitations of previous approaches by fully exploiting the potential of 3D priors, leading to more accurate and reliable 3D instance segmentation results. The improvement of the prior dataset is cleverly done and the overall architecture is sound. The dataset is going to be valuable to the community for future work.
Weaknesses: 1. Obfuscation of Feature Contributions:
- The contributions of individual features to the reported metric improvements are not clearly delineated. Ablation studies could be performed more meticulously to attribute the contribution of each feature individually.
2. Super Primitives Definition:
- A more precise and comprehensive definition of super primitives could be provided to enhance understanding and reproducibility.
3. Progressive Region Refinement Examples:
- Including examples of progressive region refinement could illustrate the process and its effectiveness more clearly.
4. 2D to 3D Space Integration:
- While the method backprojects 3D space metrics into 2D space, the potential of lifting 2D space into the 3D object space was not fully explored. This approach could be considered and discussed to justify the design choices made.
These points highlight areas where the methodology can be further refined and expanded to provide clearer insights and potentially improve performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: What are the main assumptions made in the workflow, and what is the rationale for the key design choices?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It is not clear what level of demarcation of 3D segmentation is considered. Can it work for subparts within a scene, such as parts of a chair? The level of detail seems to be one of the main deficiencies. Additionally, it is not clear what the computational complexity of the pipeline is. Only one scene is shown here. An estimate of how much improvement could be expected across a randomly chosen subset of the dataset would be good.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and detailed feedback.
**W1: Ablation studies on each feature individually**\
We have conducted ablation studies on each feature we intend to exploit (e.g., geometry, texture, and the 3D space prior). The results are provided in the following tables. APs with subscript **1** stand for the ScanNetV2 dataset, and **2** for ScanNetV2-INS. We assigned several weights to geometry and texture to test their contributions. Specifically, we include the config with $W_n$=0.4 and $W_c$=0.6, which yields a similar number of 3D primitives to those used in SAM3D, SAI3D, and others, for a fair comparison. The experiments show that the config with $W_n$=0.96 and $W_c$=0.04 suits our approach best. More experiments on Matterport3D are shown below as well, and experiments on Replica are in **Tab. 2 of the global rebuttal PDF.**
| Geometry | $W_n$ | Texture | $W_c$ | 3D Space Prior | mAP_1 | AP25_1 | AP50_1 | mAP_2 | AP25_2 | AP50_2 |
|----------|------|---------|------|----------------|----------------|-------|-------|----------------|-------|-------|
| √ | 1 | × | 0 | × | 30.8 | 50.5 | 70.6 | 28.9 | 49.2 | 69.7 |
| × | 0 | √ | 1 | × | 10.4 | 18.1 | 32.5 | 9.5 | 17.0 | 31.1 |
| √ | 0.4 | √ | 0.6 | × | 27.3 | 47.4 | 69.8 | 25.6 | 46.3 | 69.4 |
| √ | 0.96 | √ | 0.04 | × | 29.3 | 49.2 | 70.5 | 27.4 | 48.3 | 70.4 |
| √ | 1 | × | 0 | √ | 40.8 | 63.6 | 80.7 | 35.9 | 57.8 | 75.4 |
| × | 0 | √ | 1 | √ | 12.7 | 22.1 | 37.2 | 11.0 | 19.7 | 34.1 |
| √ | 0.4 | √ | 0.6 | √ | 39.1 | 62.7 | 80.2 | 33.5 | 56.3 | 75.0 |
| √ | 0.96 | √ | 0.04 | √ | **41.6** | **64.6** | **81.3** | **36.1** | **58.6** | **76.3** |
| Matterport3D | $W_n$ | $W_c$ | 3D Space Prior | mAP | AP25 | AP50 |
|---------|--------|--------|--------|--------|--------|--------|
| OpenMask3D| /| / | / | 15.3 | 28.3 | 43.3 |
| OVIR-3D | / | / | / | 6.6 | 15.6 | 28.3 |
| SAM3D | 1 | 0 | / | 10.1 | 19.4 | 36.1 |
| SAI3D | 1 | 0 | / | 18.9 | 35.6 | 56.5 |
| **Ablation of ours** | | | | | | |
| Ours #1| 1 | 0 | yes | 19.8 | 36.6 | 56.2 |
| Ours #2 | 0.9 | 0.1 | / | 18.1 | 35.7 | **62.3** |
| Ours #3 | 0.9 | 0.1 | yes | **20.6** | **38.3** | 61.0 |
It can be observed that the texture prior is not robust enough when used alone, due to the influence of shadows, reflections, and so on. Thus, we adopt the complementary primitives module, exploiting both texture and geometry. A greater weight for geometry and a smaller weight for texture yield a good initial state for the subsequent merging and refinement. The last row shows that with the 3D space prior, the performance exceeds that of using only the geometry prior.
**W2: Super Primitives Definition**\
In the context of our paper, 3D superpoints/primitives refer to clusters of 3D points whose members exhibit homogeneity in certain attributes. The purpose of generating primitives rather than using raw points is to introduce prior knowledge (such as the geometry and texture attributes we use) and to reduce the computational complexity of the subsequent process.
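The primitive generation described above can be sketched as a simple region-growing routine over a point-neighborhood graph. This is an illustrative sketch, not the paper's implementation: the cosine-normal and RGB distance terms, the union-find merging, and the threshold value are all our assumptions; only the $W_n$/$W_c$ weighting follows the rebuttal.

```python
import numpy as np

def primitive_affinity(normals, colors, edges, w_n=0.96, w_c=0.04):
    """Weighted dissimilarity between neighboring points, combining a
    geometry term (1 - cosine similarity of unit normals) and a texture
    term (Euclidean RGB distance). Only the W_n / W_c weighting mirrors
    the rebuttal; the distance functions themselves are assumptions."""
    i, j = edges[:, 0], edges[:, 1]
    geo = 1.0 - np.sum(normals[i] * normals[j], axis=1)
    tex = np.linalg.norm(colors[i] - colors[j], axis=1)
    return w_n * geo + w_c * tex

def grow_primitives(n_points, edges, dissim, thresh=0.1):
    """Union-find region growing: merge neighboring points whose
    combined dissimilarity falls below the threshold; each resulting
    group is one 3D primitive (superpoint)."""
    parent = list(range(n_points))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b), d in zip(edges, dissim):
        if d < thresh:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    return np.array([find(p) for p in range(n_points)])
```

With $W_n \gg W_c$, the texture term mainly separates instances whose normals agree but whose colors differ, which matches the motivation given in the design-choice answer.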
**W3: Progressive Region Refinement Examples**\
In Fig. 2.C of the paper we provide a visual example that demonstrates the effectiveness of our refinement process. We showcase a detailed visualization of each stage of the region growing and refinement process in **Fig. 1 bottom of the global rebuttal PDF document.**
**W4: 2D to 3D Space Integration**\
Due to inaccuracies in camera poses, the point clouds generated from posed RGB-D images are not perfectly aligned in 3D space. This misalignment can introduce noise when lifting 2D results into 3D space, as seen in methods like SAM3D. In contrast, projecting 3D mesh-sampled point clouds into 2D space is less affected by such inaccuracies. We showcase an example of SAM3D, which uses only 2D to 3D projection, in **Fig. 2 of the global rebuttal PDF document.** It is clear that the 2D to 3D projection introduces a lot of noise, which explains why recent methods, including ours, tend to explore 3D to 2D projection.
**Q: Main assumptions & reasons for key design choices**\
In general, we have made two major assumptions in our pipeline:
1) At the complementary primitives generation stage, it is assumed that points belonging to the same semantic or instance label share similar inherent attributes (geometry, texture, and so on).
2) At the instance-aware refinement stage, it is assumed that the points inside the 3D bounding boxes have high confidence to belong to the same instance.
The reasons for the key design choices are elaborated as follows:
1) **Attributes and weights for primitive generation**: This choice is based on the observation that distinct instances with similar normals often exhibit different colors, which we believe makes it a promising solution to under-segmented primitives. We then assign a greater weight to the geometric prior and a smaller weight to the texture prior, as also explained in W1, due to the variations introduced by lighting conditions, shadows, and so on.
2) **SAM variant for 2D segmentation**: We use the instance-level segmentation function of Semantic-SAM for the ScanNetV2 dataset, since few of its scenes contain a large number of small objects, and we opt for SAM-HQ for the ScanNet++ dataset due to its high-resolution scenes with detailed objects.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer oUe8,
Thank you very much for reviewing our paper and giving us some good questions. We have tried our best to answer all the questions according to the comments.
As half of the discussion time has already passed, we would be very grateful if you could respond to our rebuttal and offer us an opportunity to further engage with your concerns and address any additional questions you might have!
Thank you for your time and feedback!
Best,
Authors | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable feedback, we appreciate their detailed suggestions. We reply to each reviewer’s questions and concerns in the individual responses, and we have added tables and figures in the attached rebuttal PDF, which we reference and explain in the responses.
We would like to emphasize that our work is **problem-oriented, motivated by defects commonly found in SAM3D, SAMPro3D, SAI3D, and so on.** We therefore design two modules accordingly:
1) A complementary primitives generation module that produces more accurate and finer-grained 3D primitives to avoid error accumulation.
2) A 3D space prior that provides an instance-aware constraint, implemented with a 3D detector.
The visualization in Fig. 4 of our paper demonstrates that both purposes have been achieved and that these problems can indeed be alleviated by our approach.
Here, we also would like to provide an overview of the material in the attached document:
**Figure 1** – Redrawn schematic overview of the proposed pipeline, **in response to Reviewer 3tui**, who found it difficult to follow the original Fig. 2 in our paper. The top blocks give a simplified flow of our method, under which is the detailed version with images. We also show a full progressive merging and refinement process in the yellow dashed box at the bottom, **in response to Reviewer oUe8**, who requested progressive region refinement examples.
**Figure 2** – Visual examples from SAM3D, **in response to the discussion with Reviewer oUe8** about the potential of 2D to 3D space integration.
**Figure 3** – Visual comparison with SAMPro3D, **in response to Reviewer cYA9**. This visualization, along with Fig. 4 in our paper, demonstrates the effectiveness of our approach at alleviating both the error accumulation caused by under-segmented 3D primitives and the over-segmented 3D instances caused by knowledge transferred from the part-level masks of the 2D foundation segmentation model.
**Table 1** – Detailed ablation study on the priors we intend to exploit. We hope this addresses the concern raised by most reviewers about the effectiveness of the texture prior and the complementary primitive generation module.
**Table 2** – More experiments and ablations on the **Replica** dataset. These further prove the robustness and generalization capability of our approach.
Pdf: /pdf/2b9993341944b854eb3a8472aaf17c45a09729eb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
OASIS: Conditional Distribution Shaping for Offline Safe Reinforcement Learning | Accept (poster) | Summary: This paper discusses the safe dataset mismatch (SDM) problem, highlighting how low-reward or unsafe samples in datasets can harm offline safe RL. Conditional distribution shaping (OASIS) is proposed to mitigate this problem by generating high-reward and safe samples via diffusion models and promoting general offline safe RL algorithms with the generated data. This paper evaluates OASIS through extensive experiments across various safe RL tasks and different types of datasets with varying data distributions.
Strengths: - The paper is well motivated. The influence of imbalance and biased data on offline safe RL is an important but underexplored problem. Solving the proposed problem through diffusion-based data generation is intuitive, reasonable and novel.
- The empirical evaluation and ablation studies are comprehensive, carefully demonstrating the significance of the SDM problem and the effectiveness of the proposed approach.
- Theoretical analysis provides certain guarantees for the proposed approach.
- The paper is well written and well organized.
Weaknesses: - Some technical and experimental details are a bit confusing. See Questions 1, 2, 3, 4.
- Assumptions 1 and 2 seem kind of idealized to directly bound the distribution and policy discrepancy, since $\epsilon_{score}$ and $\epsilon_{inv}$ cannot be directly calculated or estimated. Admittedly, analyzing the distribution and policy discrepancy based on diffusion-generated data may be difficult and beyond the scope of this work. Maybe more discussion and explanation could help justify these assumptions.
- Directly excluding mismatched data from the training dataset could be another intuitive approach to the SDM problem. So it would be better to discuss the performance if tempting and conservative data were directly excluded from the full datasets, utilizing the remaining data (i.e., full dataset minus hybrid dataset) for training. A comparison between these two approaches (i.e., adding matched data vs. removing mismatched data) would be valuable.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the size of the generated dataset $D_g$? I would like to confirm whether you directly train the policy on $D_g$ or a mixture of $D_g$ and the original datasets?
2. How do you calculate the reward condition $\hat R$ based on the given cost $\hat C$?
3. Why are the thresholds in Figure 5 different for different types of datasets? Did you use the same threshold when constructing the different types of datasets? This is a concern because a dataset constructed under threshold 20 may be considered tempting for threshold 20 but not for threshold 60.
4. In Figure 6, how did you obtain the performance of baseline methods (e.g., CDT) under a specific $\alpha$? Did you use the data of size $\alpha$ generated by OASIS for these baselines? It would be helpful to provide more details about the data efficiency experiments.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We gratefully thank the reviewer for recognizing the novelty, comprehensive experiment validation, and theoretical analysis contribution of our work. We provide our response to the comments below:
> W2: Assumptions 1 and 2 seem kind of idealized to directly bound the distribution and policy discrepancy, since $\epsilon_{score}$ and $\epsilon_{inv}$ cannot be directly calculated or estimated. Admittedly, analyzing the distribution and policy discrepancy based on diffusion-generated data may be difficult and beyond the scope of this work. Maybe more discussion and explanation could help justify these assumptions.
Assumptions 1 and 2 bound the function approximation errors of the diffusion model and the inverse dynamics model, which depend on the implementation (e.g., network architecture, learning rate, etc.). As you mentioned, we also believe it may be out of scope to directly analyze this type of error. Meanwhile, these assumptions bound the expectation of the error instead of the maximal error, and they are also adopted by previous work [1,2]. Therefore, we argue that the assumptions are not overly idealized but necessary for deriving the theoretical analysis.
> W3: Directly excluding mismatched data from training datasets could be an another intuitive approach for SDM problem. So it would be better to discuss the performance if tempting and conservative data were directly excluded from the full datasets, utilizing the remaining datasets (i.e., Full dataset - Hybrid dataset) for training. A comparison between these two approaches (i.e., adding matched data vs. reducing mismatched data) would be valuable.
According to the reviewer's suggestion, we conducted experiments with these datasets on the Ball-Circle task with threshold 40. The hybrid, hybrid-R (rewarding-only), hybrid-S (safe-only), and hybrid-Com (complementary) datasets are visualized in Figure R-4 in the supplementary PDF file. We observe that when using the safe-only dataset, the learned policy is conservative, with low cost and low reward. When using the rewarding-only dataset, the learned policy is tempting, with high reward and high cost. When training with the complement of the hybrid dataset, we obtain better safety performance compared to using the hybrid dataset. However, since it still contains a large amount of imperfect demonstrations with low reward, the reward performance is not satisfactory compared to our OASIS.
|Stats|OASIS|BCQ-Lag (hybrid)|BCQ-Lag (hybrid-R)|BCQ-Lag (hybrid-S)|BCQ-Lag (hybrid-Com)|
|-|-|-|-|-|-|
|Reward|$684.87\pm9.66$|$672.99\pm40.54$|$780.91\pm15.41$|$594.99\pm27.75$|$646.98\pm70.60$|
|Cost ($\kappa=40$)|$32.10\pm3.27$|$52.11\pm1.84$|$63.39\pm2.00$|$24.17\pm5.06$|$33.77\pm9.23$|
> Q1: What is the size of the generated dataset $D_g$? I would like to confirm whether you directly train the policy on $D_g$ or a mixture of $D_g$ and the original datasets?
The numbers of transition pairs in the full datasets, the original tempting datasets, and the generated datasets used in Table 1 are shown below. Each value is the number of transition pairs.
||BallCircle|CarCircle|DroneCircle|BallRun|CarRun|DroneRun|
|-|-|-|-|-|-|-|
|full dataset|177200|435000|576243|94000|130200|395099|
|tempting dataset|126200|302100|426705|73900|29600|249432|
|OASIS generated $D_g$|62000|62000|155000|62000|62000|155000|
For OASIS, we directly train RL agents on the generated datasets.
> Q2: How do you calculate the reward condition $\hat R$ based on the given cost $\hat C$?
We select the conditions via a study similar to that in Figure 7, to find a good condition that achieves a high reward while adhering to the cost limit.
> Q3: Why are the thresholds in Figure 5 different for different types of datasets? Did you use the same threshold when constructing different types of datasets? This is a concern because one tempting dataset constructed under the thresholds 20 may be considered tempting for one threshold 20 but not for another threshold 60.
We change the thresholds in Figure 5 to show that our method can perform well under varying threshold conditions. We used different thresholds to construct different types of datasets to make them tempting/conservative/hybrid. The visualization of the dataset and corresponding threshold is also available in Appendix C.2.
According to the reviewer's suggestion, we also conducted experiments with the same thresholds on different datasets. The datasets and the experiment results are presented in Figures R-2 and R-3 in the provided PDF file. We can observe that when setting threshold=40, in all tested datasets including tempting, conservative, and hybrid, our method OASIS exhibits the best performance, achieving the highest reward among the safe agents.
> Q4: In Figure 6, how did you obtain the performance of baseline methods (e.g., CDT) under a specific $\alpha$? Did you use the data of size $\alpha$ generated by OASIS for these baselines? It would be helpful to provide more details about the data efficiency experiments.
We apologize for any confusion. In Figure 6, $\alpha$ represents the size of the RL agent training dataset. For OASIS, it denotes the size of the generated data. A subsequent BCQ-Lag agent is trained on this generated dataset to obtain the safe RL policy. For the baseline methods, we create the training dataset by randomly sampling $\alpha\%$ of trajectories from the original dataset for RL training. In this experiment, we aim to demonstrate that, for offline safe RL, the agent can learn a good policy in a data-efficient manner if the dataset has minimal safe dataset mismatch (SDM) issues. OASIS offers a solution to shape the dataset distribution, which can reduce the required dataset size for RL training while maintaining good performance. We have added a detailed explanation and analysis of this experiment in the revision.
---
[1] Holden Lee, et al. Convergence for score-based generative modeling with polynomial complexity.
[2] Sitan Chen, et al. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. The supplementary experiments and clarifications are comprehensive and help address my concerns. I maintain my score in favor of accepting the paper. | Summary: Offline Safe Reinforcement Learning (RL) is used to learn policies satisfying cost constraints from a given dataset. This proves to be a challenge when the dataset is biased in a certain way. This paper introduces a method to use the offline training dataset to capture the environment using a diffusion model that can generate data conditioned on our cost constraints and performance objectives. Experimental results show the approach generates data that better fit the constraints and outperform alternatives like dataset reweighting.
Strengths: - The paper is well organized and clearly shows the strengths of diffusion models to generate offline data with the given cost/performance objectives.
- Theoretical results show that the policies learned from the generated data are cost constrained given some reasonable assumptions on the model.
- Extensive comparisons are made to different SoTA baselines in offline RL with and without data generation.
Weaknesses: - Data generation and training can be a slow process due to the use of diffusion models which are known to be computationally heavy.
- Selecting hyperparameters such as number of denoising steps might vary results significantly. For example, Table 2 has greatly varying policy costs for different values of $K$ (albeit similar performances).
- While results are mostly consistent, it is hard to say when the proposed method prefers to act more conservatively with lower cost (or more riskily i.e., higher reward). This is reflected in the reward and costs in Table 1 (e.g., CarRun). A study on a hyperparameter change (apart from cost/return targets) to control this balance would be helpful.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How are the initial states decided to generate the data from the diffusion model? Are they the same as the initial states of the training dataset or randomly sampled from the training dataset trajectories?
2. How long is inference time for the OASIS model i.e., data generation time? Is that included in the training time (L999, App C.3)?
3. How do we decide the “target” cost given the cost threshold to get the best performance? Is the study done in Fig. 7 required for each setting or is there a good heuristic to set these targets? How expensive is this study (Fig. 7) over target values (i.e., multiple inference runs)? On a related note, how were these targets set for Table 1 on OASIS and the compared baselines?
4. Is it right to say that the primary reason for the success of the proposed method over the diffusion baselines (like FISOR) is a more realistic conditional generative model that yields optimal generated data satisfying our cost constraints?
5. Are the learned labeling models (inverse dynamics, reward, and cost) only used for labeling the data generated from the diffusion model?
6. In Fig. 6, are all models using data generated by the same diffusion model? OASIS handles the diffusion model training and data generation with BCQ-Lag as the actual Offline RL policy learning. This makes me a little confused. How are the curves related in Fig. 6?
7. Why is hybrid an interesting dataset setting vs. full?
8. Typos:
- L224 (Sec 4.4) Theoretical
- L973 (App C.1) min instead of max
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Diffusion models being computationally heavy (see Weakness)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the insightful suggestions and the praise of our theoretical results and experimental performance. We provide our response to the reviewer's comments and questions as follows:
> W1: Data generation and training can be a slow process.
We agree that generating new data adds extra computational cost to the training process. However, since we focus on the offline RL setting, this extra cost is incurred only before online inference, which we believe is still acceptable.
> W2: Selecting hyperparameters such as the number of denoising steps might vary results significantly. For example, Table 2 has greatly varying policy costs for different values of $K$.
For offline safe RL tasks, agents sometimes achieve similar rewards while obtaining different cost values when the cost limit is small and the cost signal is binary. The difference in cost performance in Table 2, where the threshold is set at $\kappa=20$, may come from this. Nevertheless, our method consistently outperforms the baselines in terms of safety constraint satisfaction and reward maximization.
We also tested our method with moderate thresholds ($\kappa=30, 40$) and show the results below. We observe that the cost performance of our method is more consistent under this setting. (All the values are normalized.)
**BallCircle ($\kappa=30$):**
|$K$|10|20|40|
|-|-|-|-|
|Reward|$0.793\pm0.014$|$0.777\pm0.017$|$0.791\pm0.010$|
|Cost|$0.877\pm0.182$|$0.857\pm0.123$|$0.970\pm0.099$|
**BallCircle ($\kappa=40$):**
|$K$|10|20|40|
|-|-|-|-|
|Reward|$0.809\pm0.007$|$0.783\pm0.015$|$0.808\pm0.014$|
|Cost|$0.845\pm0.121$|$0.740\pm0.059$|$0.819\pm0.041$|
**BallRun ($\kappa=30$):**
|$K$|10|20|40|
|-|-|-|-|
|Reward|$0.317\pm0.025$|$0.341\pm0.031$|$0.318\pm0.032$|
|Cost|$0.890\pm0.514$|$0.739\pm0.385$|$0.797\pm0.361$|
**BallRun ($\kappa=40$):**
|$K$|10|20|40|
|-|-|-|-|
|Reward|$0.374\pm0.019$|$0.392\pm0.047$|$0.364\pm0.011$|
|Cost|$0.896\pm0.137$|$0.825\pm0.336$|$0.963\pm0.176$|
> W3: While results are mostly consistent, it is hard to say when the proposed method prefers to act more conservatively with lower cost. A study on a hyperparameter change (apart from cost/return targets) to control this balance would be helpful.
In our method, the balance between reward and cost of generated data for the same model is only controlled by the input condition, i.e., the target reward/cost.
To further study the influence of other hyperparameters on the learned generative model, we provide more ablation experiments on the sequence length $L$. We use the Ball-Circle task with a threshold of $\kappa=30$ and show results on the testing dataset in the following table. We find that our method is not sensitive to $L$ within the tested range.
|$L$|32|48|64|
|-|-|-|-|
|Reward|$0.799\pm0.014$|$0.803\pm0.005$|$0.785\pm0.014$|
|Cost|$0.890\pm0.132$|$0.926\pm0.090$|$0.798\pm0.070$|
> Q1: How are the initial states decided to generate the data from the diffusion model?
They are randomly sampled from the training dataset trajectories.
> Q2: How long is the inference time for the OASIS model? Is that included in the training time?
The generation time in all experiments is less than 1 minute on an A6000 GPU. Taking Ball-Circle as an example, we train the diffusion model of OASIS with a sequence length of $L=32$. During generation, we randomly sample 2000 states from the training dataset. After a one-time generation, we obtain a dataset with $2000 \times (32-1) = 62000$ transitions. This process takes only 11 seconds. The generation time is not included in the training time presented in L999, App C.3.
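The transition count above follows directly from the sequence length: a generated sequence of $L$ states yields $L-1$ consecutive transition pairs. A minimal sanity check (the helper name is ours, for illustration):

```python
def num_generated_transitions(n_start_states: int, seq_len: int) -> int:
    # A generated state sequence of length L yields L - 1 consecutive
    # transition pairs, so the total is n_start_states * (L - 1).
    return n_start_states * (seq_len - 1)

# Matches the Ball-Circle numbers quoted above: 2000 starts, L = 32.
assert num_generated_transitions(2000, 32) == 62000
```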
> Q3: How do we decide the “target” cost given the cost threshold to get the best performance? Is the study done in Fig. 7 required for each setting or is there a good heuristic to set these targets? How expensive is this study (Fig. 7)? On a related note, how were these targets set for Table 1 on OASIS and the compared baselines?
We first set the “target” cost to the normalized cost threshold, then adjust the conditions according to the results in Figure 7 to find a good condition that achieves a high reward while adhering to the cost limit. It is a one-time inference per set of conditions, which takes only about 15 seconds in total. Meanwhile, the performance of each condition is evaluated with the learned reward and cost models and does not require online data. For CDT, which requires an additional reward condition, the reward conditions are adopted from the source code released by the authors.
> Q4: Is it right to say the primary reason for the success of the proposed method over the diffusion baselines (like FISOR), is a more realistic conditional generative model that yields optimal generated data satisfying our cost constraints?
In the offline RL setting, the distribution of the dataset matters a lot. If the data quality is low, i.e., the dataset is imbalanced and mostly contains unsafe or low-reward demonstrations, our OASIS method, which shapes the dataset distribution towards the target distribution, is more effective than baselines (e.g., FISOR) that directly model the policy with a conditional generative model.
> Q5: Are the learned labeling models (inverse dynamics, reward, and cost) only used for labeling the data generated from the diffusion model?
Yes, they are only used to label the data generated from the diffusion model.
(continued)
---
Rebuttal 2:
Comment: (continued)
> Q6: In Fig. 6, are all models using data generated by the same diffusion model? OASIS handles the diffusion model training and data generation with BCQ-Lag as the actual Offline RL policy learning. This makes me a little confused. How are the curves related in Fig. 6?
We apologize for any confusion. In Figure 6, $\alpha$ represents the size of the RL agent training dataset. For OASIS, it denotes the size of the generated data. A subsequent BCQ-Lag agent is trained on this generated dataset to obtain the safe RL policy. For the baseline methods, we create the training dataset by randomly sampling $\alpha\%$ of trajectories from the original dataset for RL training. In this experiment, we aim to demonstrate that, for offline safe RL, the agent can learn a good policy in a data-efficient manner if the dataset has minimal safe dataset mismatch (SDM) issues. OASIS offers a solution to shape the dataset distribution, which can reduce the required dataset size for RL training while maintaining good performance. We have added a detailed explanation and analysis of this experiment in the revision.
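The baseline construction described above, randomly sampling $\alpha\%$ of trajectories from the original dataset, can be sketched as follows; the function name and fixed seed are illustrative assumptions, not the authors' code:

```python
import random

def subsample_trajectories(trajectories, alpha, seed=0):
    """Randomly keep alpha percent of the trajectories, as in the
    baseline setting of Figure 6 (illustrative sketch)."""
    rng = random.Random(seed)
    k = max(1, round(len(trajectories) * alpha / 100))
    return rng.sample(trajectories, k)
```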
> Q7: Why is a hybrid dataset an interesting setting vs. a full dataset?
A hybrid dataset is more realistic for some real-world tasks. For example, in autonomous driving, we can define the cost via the distance to the nearest surrounding obstacle and the reward via the time to arrive at the destination. During data collection, some conservative drivers achieve low cost but medium reward, while some aggressive drivers achieve high reward but high cost. The combination of the two results in a hybrid dataset.
> Q8: typos.
We fixed these typos in our revised version and carefully checked the manuscript.
Title: Continued rebuttal
---
Rebuttal Comment 2.1:
Comment: I thank the authors for their response and have no further questions. | Summary: This paper proposes OASIS, which uses a conditional diffusion model to reshape the dataset distribution and achieve effective offline safe RL learning. Theoretical analysis gives the error upper bound of distribution reshaping and constraint violation upper bound. A large number of experiments show that the proposed algorithm has significantly improved compared to offline safe RL baselines.
Strengths: - The logic is clear and the paper is easy to understand.
- The theoretical analysis is sufficient.
- A large number of experiments prove the effectiveness of the proposed algorithm.
Weaknesses: ***W1:*** Some baselines, such as CVAE and FISOR, are not introduced in the related work. This may cause difficulties in understanding.
***W2:*** Typos, such as line 173 "reweighing" -> "reweighting"; wrong citations, such as line 290 "COptiDICE [17]" -> "COptiDICE [16]"
Technical Quality: 2
Clarity: 2
Questions for Authors: ***Q1:*** Compared with CVAE, OASIS shows that the conditional diffusion model has a better ability to generate according to the condition information, as shown in Figure 7(c), but this example is a bit simple. Can the author show more comparisons of the two similar to Figure 7(c)? For example, add the OASIS generation results to Figure 8(c)?
***Q2:*** When showing the effectiveness of the newly generated dataset, in addition to showing the distribution of the generated state like Figure 7(c), it is also necessary to verify whether the annotations of the inverse dynamics model and reward & cost models are accurate. Can the author supplement the accuracy of these three models?
***Q3:*** By comparing the results of OASIS and CDT, I came to a conclusion: in offline safe RL, both methods conditioned on cost and reward, generating data is more effective than generating policy. I wonder if the author agrees with this conclusion? This conclusion seems to be uncommon in the field of offline RL, or it may be because I am not familiar with distribution shaping methods.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See the weaknesses and questions sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewer for the valuable feedback. We are glad to know that the reviewer recognizes the clear logic, sufficient theoretical analysis, and experiments proving the effectiveness. We provide our responses to the questions and concerns below.
> W1: Some baselines are not introduced in related work, such as CVAE, FISOR, etc. This may cause difficulties in understanding.
We thank the reviewer for pointing this out. We have added the following sentences to the related work section. FISOR identifies the largest feasible region where agents can operate safely, optimizing for high rewards within this region while minimizing risks outside it with a diffusion policy. CVAE is an extension of the standard VAE that incorporates additional conditional information, such as class labels or attributes, into data generation, enabling more controlled and specific output generation.
> W2: Typos.
Thanks for your careful review. We fixed these typos in our revised version.
> Q1: Compared with CVAE, OASIS shows that the conditional diffusion model has a better ability to generate according to the condition information, as shown in Figure 7 c, but this example is a bit simple. Can the author show more comparisons of the two similar to Figure 7 c? For example, add the OASIS generation results to Figure 8 c?
We present additional comparisons in Figure R-1 of the supplementary PDF file. This figure illustrates the generation results of OASIS and CVAE under two conditions: medium-cost-high-reward and low-cost-medium-reward. While CVAE can reconstruct the trajectories, it fails to integrate conditions into the generation process. In contrast, OASIS successfully controls the generated trajectories, avoiding the restricted area when conditioned on low-cost-medium-reward.
> Q2: When showing the effectiveness of the newly generated dataset, in addition to showing the distribution of the generated state like Figure 7(c), it is also necessary to verify whether the annotations of the inverse dynamics model and reward & cost models are accurate. Can the author supplement the accuracy of these three models?
We provide the scaled MSE loss values of the inverse dynamics, reward, and cost models in the following tables, along with the initial loss values at the start of training for comparison. The final errors of these models are small.
**Inverse dynamics**
||BallCircle| CarCircle | DroneCircle | BallRun | CarRun | DroneRun |
|- | - | - | - | -| - | - |
|init | 1.08 | 1.04 | 1.58 | 0.68 | 1.14 | 1.03 |
|final | 0.037 | 0.18 | 0.041 | 0.056 | 0.26 | 0.015 |
**Reward model**
|| BallCircle| CarCircle | DroneCircle | BallRun | CarRun | DroneRun |
|- | - | - | - | - | - | - |
|init | 1.86 | 7.89 | 1.54 | 3.96 | 1.96 | 1.92 |
|final | 0.009 | 0.05 | 0.004 | 0.013 | 0.002 | 0.017 |
**Cost model**
|| BallCircle| CarCircle | DroneCircle | BallRun | CarRun | DroneRun |
|- | - | - | - | - | - | - |
|init | 5.59| 5.29 | 5.41 | 4.09 | 4.42 | 5.80 |
|final | 0.13 | 0.15| 0.15 | 0.32 | 0.38 | 0.27 |
> Q3: By comparing the results of OASIS and CDT, I came to a conclusion: in offline safe RL, both methods conditioned on cost and reward, generating data is more effective than generating policy. I wonder if the author agrees with this conclusion? This conclusion seems to be uncommon in the field of offline RL.
In the offline RL setting, the distribution of the dataset matters a lot. If the data quality is low, i.e., the dataset is imbalanced and mostly contains unsafe or low-reward demonstrations, our OASIS, which shapes the dataset distribution towards the target distribution, is more effective than directly generating a safe policy via conditional sequence modeling such as CDT. The importance of distribution shaping for imbalanced datasets in offline RL has also been discussed in related works [1, 2], where the authors propose sampling strategies to address this issue.
We acknowledge that the performance comparison between general policy generation and data generation in offline safe RL remains an open question. This represents a fascinating area for future research.
---
[1] Hong, Zhang-Wei, et al. "Beyond uniform sampling: Offline reinforcement learning with imbalanced datasets." NeurIPS 2023.
[2] Hong, Zhang-Wei, et al. "Harnessing mixed offline reinforcement learning datasets via trajectory weighting." ICLR 2024.
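To illustrate the kind of distribution shaping discussed above, here is a toy sketch of our own (not code from the paper or from [1, 2]): trajectories in an imbalanced offline dataset can be reweighted so that safe, high-reward demonstrations dominate sampling. The `shaping_weights` helper and all numbers are hypothetical.

```python
import math

def shaping_weights(rewards, costs, cost_limit, beta=1.0):
    # Score each trajectory: favor high reward, penalize any
    # cost above the limit (a soft safety constraint).
    logits = [beta * (r - max(c - cost_limit, 0.0))
              for r, c in zip(rewards, costs)]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Toy imbalanced dataset: two unsafe high-reward trajectories,
# two safe medium-reward ones, with a cost limit of 2.
w = shaping_weights(rewards=[10, 9, 5, 4], costs=[8, 7, 1, 0], cost_limit=2)
assert abs(sum(w) - 1.0) < 1e-9
assert w[2] + w[3] > w[0] + w[1]  # safe trajectories get more sampling mass
```

Sampling training batches with these weights skews the effective dataset toward the target (safe, high-reward) distribution, which is the intuition behind the reweighting/trajectory-weighting strategies cited above.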
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you for your response, which has addressed most of my concerns. However, I still have a few additional questions:
---
Supplement to ***Q1***: As I mentioned earlier, the example of Car-Circle seems somewhat simplistic. If time permits, could you please add the generation results of OASIS to Figure 8c?
Supplement to ***Q3***: Yes, I agree that the conclusion you provided is more accurate and reasonable.
Based on the current discussions, I will at least maintain my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the useful and constructive comments. We answer the additional questions as follows.
> Supplement to Q1: As I mentioned earlier, the example of Car-Circle seems somewhat simplistic. If time permits, could you please add the generation results of OASIS to Figure 8c?
- Figure 8c shows the CVAE reconstruction results for the Car-Circle task, supplementing Figure 7c. During the rebuttal phase, we added the OASIS generation results to Figure 8c, as requested in the suggestion:
> "_Q1: Compared with CVAE, OASIS shows that the conditional diffusion model has a better ability to generate according to the condition information, as shown in Figure 7c, but this example is a bit simple. Can the author show more comparisons of the two similar to Figure 7c? For example, add the OASIS generation results to Figure 8c?_"
- Figure R-1, provided during the rebuttal phase, shows generation results of OASIS analogous to those of CVAE in Figure 8c. To make the visualization clear, we (1) reduce the number of trajectories; (2) add different conditions for generation; and (3) visualize the generation results from OASIS and CVAE in separate figures. The random seeds used to sample trajectories differ, so the reconstruction targets in Figure R-1 and Figure 8c are slightly different. However, we use the same set of trajectories for reconstruction for both OASIS and CVAE to make the comparison in Figure R-1 clear and fair.
- We select the Car-Circle task for this visualization experiment because (1) it is widely used in offline safe RL benchmarks [1] and related offline safe RL works [2, 3, 4]; and (2) its state space contains the position of the ego agent, which is easy to visualize in 2D space.
- For the request to visualize other tasks, we will update the visualization results for other robots (i.e., Drone) that have high-dimensional observation and action spaces and complicated dynamics models. Since we cannot update the PDF file at this stage, we will include these in the appendix of the revised manuscript.
> Supplement to Q3: Yes, I agree that the conclusion you provided is more accurate and reasonable.
We thank the reviewer for the agreement and acknowledgment. We have added related discussions in our revised manuscript.
---
[1] Zuxin Liu, et al. "Datasets and benchmarks for offline safe reinforcement learning." arXiv preprint arXiv:2306.09303 (2023).
[2] Yinan Zheng, et al. "Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model." ICLR 2024
[3] Zijian Guo, et al. "Temporal Logic Specification-Conditioned Decision Transformer for Offline Safe Reinforcement Learning." ICML 2024.
[4] Kihyuk Hong, et al. "A primal-dual-critic algorithm for offline constrained reinforcement learning." AISTATS 2024. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank you all for your careful review and valuable feedback. In addition to addressing each reviewer’s comments, we would like to highlight the new examples and experiments during the rebuttal phase. The figures we refer to can be found in the attached PDF file.
1. **Additional Visualization of Comparison Between OASIS and CVAE**
In response to reviewer CUpS, we present additional comparison results of trajectories generated by OASIS and CVAE in Figure R-1.
2. **Accuracy of the Learned Inverse Dynamics, Reward, and Cost Models**
In response to reviewer CUpS, we present the accuracy of these models.
3. **Additional Ablation Experiments**
In response to reviewer fA23, we present more results and analysis of our ablation study.
4. **Additional Experiments for Distribution Shaping Methods**
In response to reviewer XmVn, we conduct experiments to show the performance of the distribution shaping method by adding matched data and reducing mismatched data.
5. **Additional Experiments with Different Types of Datasets**
In response to reviewer XmVn, we construct different types of datasets (tempting/conservative/hybrid) under the same threshold and provide supplementary experiments.
We sincerely appreciate your time, attention, and valuable feedback.
Best regards,
Authors of Submission 12456
Pdf: /pdf/06ed3f558319a18b240b62008df210a7b3fcb866.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A theoretical case-study of Scalable Oversight in Hierarchical Reinforcement Learning | Accept (poster) | Summary: The paper studies the scalable oversight problem in the goal-conditioned hierarchical reinforcement learning setup. The starting point of the paper is that, if the time horizon $H$ is large, there is only a limited amount of feedback that can be given (authors use an example of essay or code writing, which requires the labeller to read through and provide feedback on the whole output, which is is very expensive). Thus, it is crucial to leverage hierarchical feedback (e.g. judging one paragraph, or one helper function, or the high-level plan). Given a tabular MDP, the task of finding a good policy is decomposed into two levels:
- a high-level policy $\pi^h$ that computes, in a state $s$, a high-level action $a^h$ which in turn gives a goal $g(s,a^h) \in S$ and a sub-MDP $M(s, a^h)$ that operates on a subset of states $S_{s, a^h} \subseteq S$, but has access to all actions $A$
- a set of low-level policies $\pi_{s,a^h}$, for each state $s$ and high-level action $a^h$, which attempt to reach a goal $g(s,a^h)$ (almost surely) while collecting as much reward on the way as possible.
In the first part of the paper, the feedback is assumed to be cardinal. The goal function is assumed to be given and fixed. The task studied is then to find a high-level policy and a set of low-level policies, which give close-to-optimal return. This is done by employing a two-level UCB-VI procedure, where high-level actions correspond to learning UCB over sub-MDPs. The reward feedback is then only given for the low-level trajectories of length $H_l$ (since the high-level reward is computed by adding low-level ones).
In the second part of the paper, the feedback is assumed to be ordinal and to follow the BTL model. The authors point out that there is a subtle issue with the low-level feedback not always being sufficient for no-regret learning, and with the high-level feedback depending on what the labeller believes the low-level policies will be (e.g. whether they judge w.r.t. optimal or actual low-level policies). They develop a hierarchical preference-learning algorithm, H-REGIME, and analyse its regret in all three cases discussed above.
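For readers unfamiliar with it, the BTL (Bradley-Terry-Luce) model referenced here assigns preference probabilities from latent trajectory values; a minimal sketch (function name and values are illustrative, not from the paper):

```python
import math

def btl_preference(value_a, value_b):
    """Bradley-Terry-Luce model: probability that a labeller
    prefers trajectory a over trajectory b, given latent values."""
    return 1.0 / (1.0 + math.exp(value_b - value_a))

assert abs(btl_preference(1.0, 1.0) - 0.5) < 1e-12  # equal values: 50/50
assert btl_preference(2.0, 1.0) > 0.5               # better trajectory preferred
```

In the hierarchical setting of the paper, such comparisons are elicited separately for low-level trajectories and high-level plans, which is where the labeller-belief subtleties arise.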
Strengths: The paper studies an important problem of scalable oversight, and approaches it from a perspective that could potentially be used as a basis for a practical implementation, both in ordinal and cardinal setup, at least if the problems mentioned in Weaknesses are resolved. The fundamental idea of doing a hierarchical UCB-VI is quite simple, but computing the right form of the bonus, and the insight to operate in the goal-conditioned setup (leading to a particular form of low-level reward functions and simplifying the theory quite a lot) seem novel and non-trivial. The paper looks to be technically sound, and although I did not carefully check very technical proofs of various inequalities, which span 30 pages in the appendix, very detailed notes make it possible to spot-check it without issues.
Weaknesses: My main question/objection is about the H-UCB-VI algorithm. For shared learning, the algorithm involves assigning the chosen sub-MDP to an appropriate cluster $C(s,a)$. This means that the transition probabilities and rewards must be known for the whole MDP (well, for all sub-MDPs). But if this is the case, then there's no need for using UCB - we can use VI directly. Moreover, the paper improves over UCB-VI only in case there are many (small) isomorphic sub-MDPs, which makes the dependence on shared learning crucial for this to be useful. (Moreover, the motivating example used throughout the paper is given to be essay writing, which does not seem to enjoy this property. Other than that, authors do not give any other examples, nor do any concrete example calculations - even in the appendix - which makes it more difficult to understand and appreciate the work.) This is where I might be fundamentally misunderstanding the work - please correct me if I'm wrong, this is the main reason for my low rating.
In general, it was very unclear to me what exactly the influence of the goal function is on the whole setup - see questions below. The discussion was limited to a few lines in the "Goal Selection" paragraph, but I did not gain a good intuition for the interplay between the high-level policy performance and the goal function. (For example, how much work is a good goal function doing when comparing to the baseline UCB-VI?)
Technical Quality: 3
Clarity: 2
Questions for Authors: Questions:
- The definition of the MDP in the introduction does not contain a reference to high-level actions. How do they fit into the definition, formally?
- Why are states for sub-MDPs indexed by high-level action as well? Why are all actions $A$ available in sub-MDPs? What if an action leads outside of $S_{s,a}$?
- In what way do results depend on the goal function? Can it be quantified? Is it possible to derive $W(g)$, the value of the optimal policy w.r.t. this goal function, and say something about the sensitivity of this $W$ to the parameter? How would goal functions be derived in a realistic setup? (These questions are probably somewhat, or even significantly, out of scope of the paper, but I'd appreciate it if the authors provided at least some preliminary answers, since the practicality and much of the value of this development hinge on this problem.)
- What are the policy classes $\Pi^h$ and $\Pi_{s,a}$? They appear to be assumed in the definitions, but are not introduced otherwise.
- I do not know the REGIME algorithm, but I could not find it in the provided reference?
- Does the approach discussed in the paper extend to more than two levels of hierarchy?
- What is $S_h$ in line 217/218?
- Algorithm 2, line 12 - shouldn't it say length H^h trajectories?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review, reviewer z273! These are thoughtful questions and we address the main concerns below due to the space limit. We will be sure to clarify the notation (e.g. $\Pi^h$) as you suggest.
```
…The task studied is then to find a high-level policy and a set of low-level policies…This is done by employing a two-level UCB-VI procedure…
```
Thanks for the thorough summary! Just to clarify, H-UCB-VI does not have to use UCB-VI as a subroutine. Any no-regret subroutine will do (L177). For instance, if sub-MDPs are linear, one can use LSVI. If sub-MDPs are linear and share structure, one can use more specialized algorithms like [11] for better guarantees (L220-2).
```
My main question/objection is about the H-UCB-VI algorithm. For shared learning, the algorithm involves assigning the chosen sub-MDP to an appropriate cluster. This means that the transition probabilities and rewards must be known for the whole MDP… But if this is the case, then there's no need for using UCB - we can use VI directly. Moreover, the paper improves over UCB-VI only in case there are many (small) isomorphic sub-MDPs, which makes the dependence on shared learning crucial for this to be useful.
```
Thanks for this question:
1. Firstly, the assumption from [25] is that we know which sub-MDPs share the same transitions and belong to the same cluster. It does not mean we know the transitions of the MDPs in the clusters, as this would indeed trivialize the problem (and the results of [25]).
2. Next, the concern is that the paper improves upon UCB-VI when there are many isomorphic sub-MDPs. Our goal in providing this corollary (of our main result) is simply to emulate the analysis of [25], which makes this assumption to show the statistical efficiency of HRL. We do not claim that this assumption is widely applicable. Rather, this corollary serves as a “sanity check” that our hierarchical algorithm *is* more efficient in a setting where it should be (L211-213).
3. Our main result in this section is that HRL reduces to multi-task, sub-MDP regret minimization. This result allows one to flexibly leverage shared structure, beyond the cluster assumption, to improve the regret. Please see our answer above.
If it improves the presentation of the paper, we can move the corollary to the appendix and not mention the clusters to avoid confusion, as this corollary is not central to the paper.
```
…authors do not give any other examples, nor do any concrete example calculations - even in the appendix - which makes it more difficult to understand and appreciate the work.)
```
Yes, we can definitely provide more examples to aid understanding! A canonical example in HRL is the maze (e.g. [22]). A maze consists of rooms with doors. The goal is to get to the exit in as few steps as possible.
1. For the global MDP, $S = S^h \times S^l$ where $s^h$ denotes the index of the current room, and $s^l$ denotes the position of the agent in the room. Action set $A$ consists of moving (L, R, U, D, Stay).
2. High-level MDP: high-level action $A^h$ consists of moving to the (N, S, E, W) door of the room. $s$ is the current location of the agent, and $g(s, a^h)$ maps the goal (door) to its location.
3. Low-level MDP: has state space $S^l_{s,a} \subset S$ and the action set $A$ is the same moving (U, D, L, R, Stay).
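A minimal sketch of this decomposition (our own illustration; the 5x5 room size and door coordinates are hypothetical, purely to make the structure concrete):

```python
# Maze decomposition as described above: the global state is
# s = (s^h, s^l) = (room index, position in the room).
DOORS = {"N": (0, 2), "S": (4, 2), "E": (2, 4), "W": (2, 0)}

LOW_LEVEL_ACTIONS = ["U", "D", "L", "R", "Stay"]  # primitive action set A

def goal(state, high_action):
    """g(s, a^h): map the chosen high-level action (which door to
    head for) to the goal position inside the current room."""
    room, pos = state
    return DOORS[high_action]

s = (3, (2, 2))                  # in room 3, at the room's center
assert goal(s, "N") == (0, 2)    # "go to the north door" -> its location
assert "Stay" in LOW_LEVEL_ACTIONS
```

The high-level policy picks a door (high-level action), and the induced sub-MDP is the goal-reaching problem of walking to that door with the primitive moves.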
```
The definition of the MDP in the introduction does not contain a reference to high-level actions. How do they fit into the definition, formally?
```
The high-level action set $A_h$ is usually set based on prior knowledge. We describe what it is in the essay example (L63-5). $A_h$ does not have to be related to $A$ in the global MDP definition. For example, in the maze case, moving to the N, S, E, W door is a wholly different action set from moving in the U, D, L, R directions.
```
Why are states for sub-MDPs indexed by high-level action as well? Why are all actions $A$ available in sub-MDPs? What if an action leads outside of $S_{s,a}$?
```
The high level action is needed to determine the goal state of the sub-MDP, which defines the sub-MDP reward. The action set is defined to be $A$ in sub-MDPs, as is commonly assumed in GC-HRL literature. An action cannot lead to a state outside of $S^l_{s,a}$, because by definition, $S^l_{s,a}$ is the set of all reachable states (L60).
```
In what way do results depend on the goal function? Can it be quantified?…How would goal functions be derived in a realistic setup?
```
Thanks for this question, we agree that our learned policy is only as good as the goal function chosen (L224-5), which is key to the success of GC-HRL:
1. As we write on L29-32, there are already many settings of interest where we *have* prior knowledge of a good hierarchy/goal function. This is because we humans have often (and successfully) taken the hierarchical approach to build up to and produce these long-form creations. So we know what are good goals to set e.g. we write essays by first writing an outline of arguments, then expanding out each point in the outline.
In such settings, the algorithms we develop can already help to scale up bounded feedback and enable scalable oversight.
2. Indeed, this approach of explicitly encoding prior knowledge in the learning algorithm is common in both GC-HRL literature (e.g. we know apriori mazes consist of rooms [22]) and scalable oversight literature (e.g. books consist of chapters [27]).
3. Outside of such settings, we agree it is an open problem to learn apt hierarchical decompositions/goals (L329-30). It is an exciting direction that could realize end-to-end scalable oversight [15].
```
Does the approach discussed in the paper extend to more than two levels of hierarchy?
```
Our algorithm is applicable to any number of levels of hierarchy due to the reduction of HRL to multi-task, sub-MDP regret minimization. This allows for the case where each sub-MDP is itself hierarchical, and one can invoke H-UCB-VI with H-UCB-VI itself as the subroutine.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their thorough response. I think all of the minor points and questions are satisfactorily handled, except I still don't understand how exactly the authors want the set $A_h$ to be formally defined (e.g. [25] introduces a similar notion formally in Definition 1 using a notion of a partition of the MDP states).
However, I am still unconvinced by the assumption that the clustering function is available. I admit it isn't literally implying that transition probabilities and rewards are known, but at the same time, I have a hard time imagining any semi-realistic situation in which one does not imply the other. In full RL, the agent doesn't know which situation (which sub-MDP) it found itself in, which would necessitate querying the clustering function online, and that in turn would imply knowing $\tau$ and $R$. Indeed, the running example of essay writing is still not showing that, and the maze environment is very simplistic.
As far as I see, [25] does not discuss the feasibility of the assumption about the partition being known (agent having access to the clustering function, in the terminology here), but a large amount of their analysis is in the context of planning, where the full structure of MDP is known anyway.
Of course, it is possible that the algorithm would still work as expected if the clustering was only approximate - but the paper does not say anything about that.
I am therefore maintaining my rating for now.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer z273
Comment: Thank you for getting back to us!
```
I am still unconvinced by the assumption that clustering function is available. I admit it isn't literally implying that transition probabilities and rewards are known, but at the same time, I have hard time imagining any semi-realistic situation in which one does not imply the other.
```
Firstly, we wish to emphasize that this assumption of the known clustering function is *only* used in this paper to derive Corollary 1, to do the "sanity check" comparison. This assumption is *not* central to the paper's main results. Outside of Corollary 1, removing this assumption does not affect any other result. This is because overall, our work is about scaling up bounded feedback via HRL, and not comparing the statistical efficiency of HRL vs RL (as in [25]). We will be sure to clarify this in our revision and we apologize for the confusion.
Secondly, as we write in our previous response, we agree with you that the cluster assumption from [25] is quite specific/strong. When we remove the assumption and the cluster function is unknown, we can simply have each sub-MDP be its own cluster. In such settings, our main result in section 3 (Theorem 1) is still useful as it can capture improvements in regret from other general forms of shared learning, since we have shown learning reduces to multi-task, sub-MDP regret minimization.
One example of this is [11] applicable to the linear sub-MDP setting (L221-222), which makes use of the low-rank and not the cluster assumption. This is perhaps another point of difference between our paper and [25] in that we formalize the reduction to multi-task learning, which demonstrates improvements in the regret bound from other forms of shared learning.
```
In full RL, the agent doesn't know which situation (which sub-MDP) it found itself in
```
As a point of clarification, in our setting, the agent *does* know which sub-MDP it is in. Sub-MDPs are defined by the state and high-level action, both of which are known to the agent. Presumably, you meant to write "sub-MDP cluster" here?
```
...except I still don't understand how exactly the authors want the set $A_h$ to be formally defined (e.g. [25] introduces a similar notion formally in Definition 1 using a notion of partition of states MDP states).
```
Thank you for this! The definition you point to in [25] is specific to a particular class of hierarchical MDPs. We chose not to define $A_h$, because it is really dependent on the MDP's hierarchical structure. But we agree with you that for concreteness it helps to have a formal definition. Following your suggestion, we will cite their definition as an example of how $A_h$ can be formally defined to model a certain class of hierarchical MDPs (e.g. mazes).
Thank you again for taking the time, and please do not hesitate to follow up if you have any further questions! We appreciate it.
---
Rebuttal 2:
Title: Reply to Reviewer z273
Comment: Thanks for continuing to engage with us! We are happy to go over this, and will be sure to correct the typo as you suggest, thanks.
```
Theorem 1 analyses the behavior of Algorithm 1, and shows that the regret grows not faster than (number of clusters)*(upper bound on regret in a cluster).
```
More precisely, it is the sum of the upper bound on the regret of the clusters, which in settings with shared structure between sub-MDPs can be better than (number of clusters)*(upper bound on regret in a cluster).
```
As the authors claim above, Algorithm 1 can work with a clustering function that is a refinement of the true clustering function (that is, the clusters are more fine-grained).
```
That is right, Algorithm 1 provides a regret guarantee for any cluster function that is correct.
Our previous point is that even in settings where we take each sub-MDP to be its own cluster (the most refined clustering if you'd like), improvements in the regret bound are still possible e.g. due to low-rank structure.
```
However, as far as I understand, the bound uses the $C$ that is actually used in the algorithm, and not the (unknown) "ground-truth" clustering given by Definition 2.
```
This is correct. When the cluster function is unknown, the algorithm will use the one where each sub-MDP is its own cluster (and not the unknown, best cluster function).
```
In particular, if we assume no prior knowledge of the clustering structure, the bound becomes proportional to $|A_h|\cdot |S|$... my reading was that, in this case, it was trivialising the bound of Theorem 1. (I.e. if the hierarchical structure is a priori unknown, then we can do no better than a normal non-hierarchical approach).
```
That’s right, without prior knowledge, we do not do better than the non-hierarchical algorithm in terms of statistical complexity, as you write. With that said, Corollary 1 emulates the comparison done in [25]. In [25], it is assumed this cluster function is *known* and the HRL algorithm is proven to be better under certain conditions. We do the same here, also assuming the cluster function is known (with no endorsement of how realistic this assumption is). We agree with you it would be interesting future work to design an algorithm that can cluster on the fly.
Finally, and importantly, the comparison above is in terms of statistical efficiency. Under our assumption of bounded feedback, the non-hierarchical approach is actually *not applicable* here, due to its excess trajectory length of $H_h H_l$ (L54). Hence, we believe Algorithm 1 is not trivial, but rather a useful contribution that can achieve no-regret while learning from bounded cardinal feedback.
Again, thank you for your time and please let us know if there are any other questions!
---
Rebuttal Comment 2.1:
Title: Response
Comment: I again applaud the authors for engaging with me on this.
I think the last exchange validates the key points of my understanding of the theorem. So I remain convinced about the centrality of the known-clustering assumption, via the chain "Unknown clustering" -> "Algorithm 1 uses trivial clustering" -> "Theorem 1 proves trivial bound about its regret" -> "Corollary 1 doesn't improve on the baseline". Or, more straightforwardly, if the learner doesn't know that sub-MDP A is isomorphic to a sub-MDP B, but they indeed are, presented results are not impactful in any way.
At the same time, I agree that, introducing the assumption about the feedback being limited to $H_l, H_h$ is an important (but, I think, a bit separate, point), which makes the contribution valuable on its own, so I decided to increase my score to recommend acceptance.
---
Reply to Comment 2.1.1:
Title: Reply to Reviewer z273
Comment: Thank you for your timely response and update! To wrap up our discussion, we have two more quick points to share if we may.
```
So I remain convinced about the centrality of the known-clustering assumption, via the chain "Unknown clustering" -> "Algorithm 1 uses trivial clustering" -> "Theorem 1 proves trivial bound about its regret" -> "Corollary 1 doesn't improve on the baseline".
```
This is exactly right. To confirm, without this assumption, we cannot derive Corollary 1. The clustering assumption is central to the favorable comparison result (and that of [25]).
```
if the learner doesn't know that sub-MDP A is isomorphic to a sub-MDP B, but they indeed are, presented results are not impactful in any way.
At the same time, I agree that, introducing the assumption about the feedback being limited to $H_l, H_h$ is an important (but, I think, a bit separate, point), which makes the contribution valuable on its own
```
Just to offer our understanding,
1. Under the “trivial clustering”, Theorem 1 leads to some no-regret guarantee. In a vacuum, we completely agree with you that this guarantee is not impactful. It doesn’t improve upon the baseline (non-hierarchical algorithm), which one can readily use.
2. However our previous point was that under the premise of the paper (limited feedback), the regret guarantee, even under the “trivial clustering”, does become *more* meaningful. This is because the baseline is no longer applicable. And so, we think having an algorithm that can achieve *some* no-regret guarantee is a step forward, in the context of scalable oversight.
Lastly, we definitely agree that the algorithm can be improved, since it doesn’t do online clustering. It can only leverage certain types of multi-task structure, but not all. We will highlight this as a key direction for future work.
Please let us know if there are more questions, and thank you again for your thorough engagement during the process! It has definitely helped to improve our paper. | Summary: Provides proofs of the regret bounds for cardinal and ordinal feedback using the sub-MDP framework of goal-conditioned HRL. Sub-MDPs are defined by a starting state, fixed horizon, high-level action and subspace. The high-level policy selects transitions between sub-MDPs. This work first proposes an upper confidence bound algorithm based on lower-level reachability and selection, then proves the regret of this algorithm.
Strengths: The method provides clear and applicable analysis of hierarchical goal-conditioned RL, even outside of the context of feedback.
The derivation of the regret due to goal-conditioned HRL is informative and interesting.
The proposed insight of ensuring hierarchical consistency is meaningful for many applications of GCHRL.
Weaknesses: Algorithm 1 is somewhat difficult to follow. In particular, while the components make sense, the core insight of the bounds on the lower-level goals is difficult to locate.
The regret analysis proof sketch could provide more insight. In particular, the separation of which components occur because of the high-level and low-level errors, which introduce the product of horizons, is not intuitively clear.
The work for ordinal feedback is not particularly well contained, since it relies on the properties of REGIME, which are not made clear in this work. In particular, it is not obvious without looking into that work where the sub-policy simulator fits in.
In many ways this seems less to be a work about the nature of scalable feedback, and simply on the nature of hierarchical RL in providing regret bounds. This subject is of interest regardless but calls to question the framing of this work.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the work on regret analysis of GCHRL be separated from the analysis of ordinal rewards?
What is the relationship between the analysis of these two?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The analysis of other components of HRL, such as goal sampling, learning procedure, etc. are not that well captured, though this is typically challenging to analyze theoretically.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review, reviewer poPw! These are great questions, which have definitely helped to improve our paper and its presentation.
```
Algorithm 1 is somewhat difficult to follow. In particular, while the components make sense, the core insight of the bounds on the lower-level goals is difficult to locate.
The regret analysis proof sketch could provide more insight. In particular, the separation of which components occur because of the high-level and low-level errors, which introduce the product of horizons, is not intuitively clear.
```
Thank you for this question! To answer your question, the regret is *only* a function of low-level error in the cardinal case. Hence, one of our main results is that GC-HRL reduces to multi-task, sub-MDP regret minimization. In our proof sketch, we provided an outline of the error terms that contribute to regret. More specifically:
1. $\rho_h^k$: regret due to low level policy incurring sub-optimal returns.
2. $\gamma_h^k$ and $\sigma_h^k$: regret due to sub-optimal low-level policy not reaching the goal state.
The product of horizons $H_h H_l$ shows up in the low-level regret when low-level policies miss the goal. Indeed, when the goal state is not reached, it is unclear which state the agent would be in. Thus, the maximal difference between any two returns, $H_h H_l$, is used as an upper bound on the resultant regret.
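Schematically (in our own shorthand here, not the paper's exact statement), the decomposition described in the two items above reads:

```latex
\mathrm{Regret} \;\lesssim\; \sum_{k,h} \rho_h^k \;+\; H_h H_l \sum_{k,h} \left( \gamma_h^k + \sigma_h^k \right)
```

That is, each goal-miss event is charged the worst-case return gap $H_h H_l$, while episodes that do reach their goals only pay the sub-MDP sub-optimality $\rho_h^k$.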
```
The work for ordinal feedback is not particularly well contained, since it relies on the properties of REGIME, which are not made clear in this work. In particular, it is not obvious without looking into that work where the sub-policy simulator fits in.
```
Thank you for this nice feedback! Our current algorithm includes the full details of REGIME. We agree with you that a more modular description of Algorithm 2 would improve the presentation. We will be sure to include this in our next revision. Here it is with reference to line numbers:
> 1. Invoke one copy of REGIME *across* all sub-MDPs with shared exploration (L2-4) and learned reward (L6).
> 2. Compute near-optimal, sub-MDP policies $\pi^{N_l}_{s,a}$ that the high level policy will invoke (L7).
> 3. Invoke one copy of REGIME for the high-level MDP, where the feature expectation of each sub-MDP is computed according to $\pi^{N_l}_{s,a}$ and $P^{\epsilon’}$.
We arrived at Algorithm 2 by improving upon the naive application of REGIME to the hierarchical setting, with the improvements as described on L295-307.
Finally, as for the role of the simulator, the REGIME algorithm assumes access to a simulator $P^{\epsilon’}$ to compute policy feature expectations. We do the same when invoking REGIME, denoting the feature expectation with notation $\phi^{P^{\epsilon’}}(\pi)$ in our algorithm.
```
In many ways this seems less to be a work about the nature of scalable feedback, and simply on the nature of hierarchical RL in providing regret bounds. This subject is of interest regardless but calls to question the framing of this work.
```
To clarify, our work does shed some light on the nature of scalable feedback. We show that one way to generate scalable feedback is by leveraging hierarchical structure (L27-29). Our work analyzes how to scale feedback, and develops HRL algorithms that efficiently learn with provable guarantees.
A general point that our paper makes is that besides the benefits of easier credit assignment and exploration (as listed in [25]), HRL has the added benefit of allowing for scalable oversight.
Certainly, we agree that our work does not provide a full characterization of “the nature of scalable feedback”, as you write. We study one natural setting (hence a “case-study” as in our title), the hierarchical setting, where we show we can provably scale up bounded feedback.
This may not be the only setting that allows for scalable feedback. And discovering other types of scalable feedback and analyzing their nature is largely unexplored (and exciting) territory.
```
Can the work on regret analysis of GCHRL be separated from the analysis of ordinal rewards?
What is the relationship between the analysis of these two?
```
Thanks for this question! The key difference between the two analyses is that in the cardinal case, the total regret decomposes into the sum of sub-MDP regret (Theorem 1). In the ordinal case, the total regret decomposes into both high-level MDP regret and sub-MDP regret. As we show in Proposition 1, this is because we need to not only learn from comparison which policies are best within sub-MDPs (incurring low-level regret), but also which sub-MDPs yield the highest returns across sub-MDPs (incurring high-level regret). This is made explicit starting from the third line of the proof of Theorem 4 (L645 in the appendix), and results in a different type of analysis.
The key commonality underlying the two analyses is that an apt sub-MDP reward design is needed to incentivize goal-reaching *in balance* with maximizing the return of the sub-MDP (L172-175). This we believe is one of our main contributions, in finding the “right” reward setting and weighting to balance the two objectives. It is also what allows us to derive a new form of bonus that trades off between the two (L176-8), such that we can bound the resultant cumulative regret w.r.t. the global MDP.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I appreciate the detailed response and believe the work should be accepted, and will maintain my score.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer poPw
Comment: Thank you for getting back to us, and we appreciate your helpful feedback! | Summary: This paper analyzes scalable oversight in the context of goal-conditioned hierarchical reinforcement learning. Specifically, it theoretically shows that it is possible to efficiently use hierarchical structure to learn from bounded human feedback.
Strengths: * The problem is significant and of high importance to the community.
* The theoretical analysis and sub-MDP reward design for Hierarchical-UCB-VI appears novel. Furthermore, the extension to ordinal (preferences) feedback appears novel, specifically in the proposed Hierarchical-REGIME algorithm and analysis.
Weaknesses: * Minor: It would be a nice addition to the paper to empirically demonstrate Algorithm 1 &2 as well.
* Minor: Clarity: Paper had quite a few typos. I would encourage the authors to use an automated tool to check for spelling / grammar mistakes.
Typos:
* L61: “policies aims to” -> “policies aim to”
* L183: “approach is avoid” -> “approach is to avoid”
* L218: “identifical” -> “identical”
* L250: “what can we assume the” -> “what can we assume is the”
Technical Quality: 3
Clarity: 2
Questions for Authors: * Does the analysis for the linear setting generalize to the non-linear / function approximation setting?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes they are discussed in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, reviewer Hzkp! It has definitely helped to improve our paper and its presentation.
```
Paper had quite a few typos. I would encourage the authors to use an automated tool to check for spelling / grammar mistakes.
```
Thank you for your careful reading! We will be sure to correct these.
```
Does the analysis for the linear setting generalize to the non-linear / function approximation setting?
```
Thanks for this question! Our paper invokes REGIME as a subroutine, taking its guarantees as a given. REGIME is designed with the assumption of linear reward, and we believe more work is required to extend it to the non-linear setting.
With that said, linearity is a commonly used assumption in the offline RLHF literature [29,30]. The justification is that we often do not have to learn the features $\phi$; such features may already be available from (self-supervised) pre-training. Thus, it is common to assume a linear model on top of non-linear features $\phi$ for reward modeling.
Also, we note that the linearity assumption pertains to the specifics of the subroutine invoked (in this case REGIME). The insight from our analysis of favoring low-level learning given sufficient coverage applies regardless.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal response. I have raised my score.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer Hzkp
Comment: Thank you for getting back to us, we appreciate it! | Summary: In this paper, the authors study how to scale human feedback, in the context of goal-conditioned hierarchical reinforcement learning. For this work, the authors assume that humans can only provide feedback for outputs with length below a certain threshold. Thus, it is necessary to scale the little feedback provided, in order to improve the complete output. This means, for example, that a human can provide feedback about the high-level actions taken in the hierarchy, but not the detailed steps of the low-level policies. The authors propose two algorithms: one that learns from low-level feedback only, and one that learns by incorporating human preference over high-level trajectories. The authors also present regret bounds for the proposed algorithms.
Strengths: The paper studies a combination of very important topics: theoretical understanding of hierarchical reinforcement learning and the outcomes of incorporating human feedback. I believe that the field still lacks a strong theoretical understanding of HRL, and this paper provides a step in that direction.
The paper is generally very well written and easy to follow. The authors clearly state the setting they are dealing with, provide a useful running example, and intuitive discussions about their theoretical results.
Weaknesses: The results are based on a very strong assumption that there is a well-defined goal function available (that is, a goal function that only proposes feasible goals). It is very hard to guarantee goal feasibility, so assuming that this function is given is very ambitious. However, the authors do recognize the limitation of this assumption and propose, on a high-level, how this could be dealt with in future work.
When discussing the bounds related to the second algorithm (Section 4), it was necessary to introduce several constants, but they were not discussed in more detail, providing, for example, intuitions on the magnitude of such constants. Without it, it is more challenging to understand the significance of the bounds.
Technical Quality: 3
Clarity: 3
Questions for Authors: There were a few variables/subscripts used throughout the paper that, I believe, were not formally introduced. As some examples, $a$ on line 60, $h$ and $i$ on the Interaction Protocol paragraph (no line numbers here), $l$ on line 67, $\bar{r}$ on line 179, $N(s,a)$ on line 194, $d$ on Lemma 4. Could the authors please add clarifications about these variables?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss the limitations of the work. No major concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review, reviewer 2zMJ! It has definitely helped to improve our paper and its presentation.
```
The results are based on a very strong assumption that there is a well-defined goal function available (that is, a goal function that only proposes feasible goals). It is very hard to guarantee goal feasibility, so assuming that this function is given is very ambitious.
However, the authors do recognize the limitation of this assumption and propose, on a high-level, how this could be dealt with in future work.
```
Thanks for this question, we agree that our learned policy is only as good as the goal function chosen (L224-5), which is key to the success of GC-HRL:
1. As we write on L29-32, there are already many settings of interest where we *have* prior knowledge of a good hierarchy/goal function. This is because humans have often (and successfully) taken the hierarchical approach to build up and produce such long-form creations, so we know what good goals to set are, e.g., we write essays by first writing an outline of arguments, then expanding each point of the outline.
In such settings, the algorithms we develop can already help to scale up bounded feedback and enable scalable oversight.
2. Indeed, this approach of explicitly encoding prior knowledge in the learning algorithm is common in both the GC-HRL literature (e.g. we know a priori that mazes consist of rooms [22]) and the scalable oversight literature (e.g. books consist of chapters [27]).
3. Outside of such settings, we agree it is an open problem to learn apt hierarchical decompositions/goals (L329-30). It is an exciting direction that could realize end-to-end scalable oversight [15].
```
When discussing the bounds related to the second algorithm (Section 4), it was necessary to introduce several constants, but they were not discussed in more detail, providing, for example, intuitions on the magnitude of such constants. Without it, it is more challenging to understand the significance of the bounds…There were a few variables/subscripts used throughout the paper that, I believe, were not formally introduced…Could the authors please add clarifications about these variables?
```
Thank you for your careful reading, we will definitely clarify these notations as you suggest! We have also centralized the notation in Table 1 of the appendix, which we will also add to and link in the main paper for added clarity.
---
Rebuttal Comment 1.1:
Comment: Thank you for the added details! After reading your response and the other reviews/discussions, I still believe the paper should be accepted.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer 2zMJ
Comment: Thank you for getting back to us and taking the time to read through everything (which is quite a bit), we appreciate it! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Unraveling and Improving Generalization in World Models | Reject | Summary: This paper investigates the generalization capabilities of world models in RL, particularly with respect to latent representation errors, which arise when observations are encoded into a low-dimensional latent space. The authors provide a bound on latent representation error when using CNN encoder-decoder architectures. The world model is framed as a stochastic differential equation to characterize the impact of latent representation errors on generalization in terms of either zero or non-zero drift. The authors provide theoretical analysis which shows that these errors can result in implicit regularization in the zero drift case, and propose a Jacobian regularization scheme to tackle the unwanted bias term in the non-zero drift case. Finally, when performing model rollouts for learning a policy, the authors study the effect of these errors on the value function. Experiments on Mujoco tasks demonstrate that the proposed Jacobian regularization enhances robustness to noisy states, reduces the detrimental impact of latent representation errors, and improves convergence speed for longer horizon tasks.
Strengths: - World models are a popular area of research in the RL community, but there is a lack of theoretical understanding. This paper takes one step towards theoretically analyzing the generalization capabilities of world models.
- The analysis of the effect of latent representation error is a novel theoretical contribution, to the best of my knowledge.
- The results in the paper seem mathematically sound and provide useful insights. The empirical results demonstrate that the Jacobian regularization, which naturally arises from the theoretical analysis, is helpful in improving robustness.
- As a very theory-heavy paper, the authors structured the writing such that it makes it easy to follow each individual result (though there is some room for improvement here, see weaknesses).
Weaknesses: While the paper studies a previously unexplored problem, there are some questions about the significance of these findings and the use of drift and diffusion terms to represent the error. Other areas for improvement include explaining the insights from the theoretical analysis more clearly, describing the experimental settings in more detail, and supporting certain claims with more evidence.
- Studying the effect of latent representation error is certainly useful, however, with recent advances in representation learning approaches, one can learn reasonably good representations such that the reconstruction error is negligible. When it comes to model-based RL, a much bigger issue is the compounding model error, which is a result of error in the latent/state dynamics model predictions. A comment from the authors on this aspect would be helpful.
- The decomposition of latent error into drift and diffusion terms seems a bit contrived. It is not clear how the error can be expressed in this form, and what defines the scenarios of zero versus non-zero drift.
- The interpretation that propagation of latent error leads to the model exploring novel states seems somewhat questionable. My understanding is that the erroneous states improve robustness similar to noise injection, but will most likely not be valid states belonging to the state space of the MDP. Some reasonable evidence is required to support this statement.
- The paper presents several results and including some intuitive or low-level explanation for each of those results would greatly improve readability. Additionally, due to the large amount of mathematical notation used throughout the paper, it would be helpful to include a notation table in the appendix for easy reference.
- The experimental setting is not sufficiently clear, especially in the introduction when the authors refer to Table 1. With regards to the perturbations - are they applied to every state in the trajectory? For masking, is the same mask used for every state, or is the mask also sampled randomly? With regards to injecting encoder error - how to interpret the $\mu_t$ and $\sigma_t$ values?
Technical Quality: 3
Clarity: 2
Questions for Authors: - I am not sure I fully understand the relation of batch size with the latent representation error in Table 1. Since one would take the mean over the batch size, it should not affect the error magnitude. Is the variation in performance due to stochasticity of the SGD updates? If so, how does it relate to latent representation error? Also, as mentioned in weaknesses, the experimental setting should be clearly explained at this point in the paper.
- What do the drift and diffusion terms corresponding to the latent representation error signify?
- Theorem 3.7 suggests that the regularizing effect of the latent representation error can be attributed to the Hessian of the loss function, which encourages wider minima. As noted by the authors, this term is non-negative only if the loss function is convex. Could the authors comment on how to interpret this result for non-convex loss functions (which is usually the case in deep learning)?
- The analysis in this paper focuses on Dreamer style models which use an RNN to represent the latent dynamics. A parallel line of work, TD-MPC [1], uses an MLP model to predict the next state which is applied recursively. I am interested to know if the authors have any thoughts on the applicability of their analysis to such methods, and if there are any major differences.
- An alternate method to improve robustness to perturbations is by training the model/value function on different augmentations of the states [2]. I understand the limitations of time during the short rebuttal period, but an empirical comparison would significantly enhance the paper. I am also curious to know the authors’ thoughts on the pros/cons of using the Jacobian regularization term over such data augmentation methods.
- How much computational overhead is added by the calculation of the Jacobian regularization term?
[1] Hansen, N.A., Su, H. and Wang, X., 2022, June. Temporal Difference Learning for Model Predictive Control. In *International Conference on Machine Learning* (pp. 8387-8406). PMLR.
[2] Yarats, D., Kostrikov, I. and Fergus, R., 2021, May. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In *International conference on learning representations*.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: There is little discussion on the limitations of the analysis. Some points worth discussing could be the impact of various assumptions when deriving the results, the fact that the analysis is mostly focused on a specific setting - learning from pixels using a CNN encoder and an RNN latent dynamics model, and further investigation of the compounding model error problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful feedback on our work! We respond to your questions as follow:
### **A1. On the analysis of compounding model error**
Thank you for highlighting this important aspect. We agree that compounding model error is critical in world model analysis. During both training and predictive rollouts, even if the latent representation error is small at each time step, its accumulation through the dynamics model's predictions throughout the trajectory is significant. Our results show that in the training phase, the accumulated effects of zero-drift errors can act as implicit regularization (which is somewhat surprising but consistent with the noise injection techniques used in deep learning [1, 2]), while non-zero drift errors require additional regularization. During predictive rollouts, accumulated errors can lead to divergence if not properly regularized. Importantly, the degree of these effects depends on the dynamics model’s Jacobian. Hence, we propose Jacobian regularization.
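For illustration, a minimal numpy sketch of a Jacobian-norm penalty (our own illustrative code with hypothetical names, not the paper's implementation): the penalty estimates the squared Frobenius norm of the input-output Jacobian of a latent dynamics map via finite differences.

```python
import numpy as np

def jacobian_frobenius_sq(f, z, eps=1e-5):
    """Estimate ||J_f(z)||_F^2 via central finite differences, one column at a time."""
    d = z.shape[0]
    total = 0.0
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        col = (f(z + e) - f(z - e)) / (2 * eps)  # approximates the i-th Jacobian column
        total += float(np.sum(col ** 2))
    return total

# Toy latent dynamics: a linear map, whose Jacobian is just the weight matrix W.
W = np.array([[0.5, 0.1],
              [0.0, 0.8]])
f = lambda z: W @ z

penalty = jacobian_frobenius_sq(f, np.array([1.0, -2.0]))
# For a linear map the estimate equals ||W||_F^2 = 0.25 + 0.01 + 0.64 = 0.9
print(penalty)
```

In practice this penalty would be added to the world-model training loss to bound how strongly errors are amplified through the dynamics.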
Our theoretical results explicitly consider these compounding effects by interpreting world models as SDE dynamical systems to characterize error propagation. For instance, in Theorem 3.7 and Corollary 3.8, the derived regularization terms $\mathcal{R}$ and bias term $\tilde{\mathcal{R}}$ account for the accumulative effects of continuously introduced encoder errors on the sequence model $f$ and transition predictor $p$. This means that the effects of encoder errors from the initial time step to the entire trajectory of state predictions are captured in $\mathcal{R}$ and $\tilde{\mathcal{R}}$.
Regarding the compounding model error due to additional inaccuracies in the latent/state dynamics model predictions, our main results can be generalized to include errors from other model components, such as transition predictor error. For example, one could incorporate error terms $\sigma$ as $(\sigma_{\text{enc}}, 0, \sigma_{\text{pred}}, 0)$ and $\bar{\sigma}$ as $(\bar{\sigma}\_{\text{enc}}, 0, \bar{\sigma}\_{\text{pred}}, 0)$ in Theorem 3.7 and Corollary 4.2.
1. Alexander Camuto et al., "Explicit Regularisation in Gaussian Noise Injections," Advances in Neural Information Processing Systems 34 (NeurIPS 2020), 2020.
2. Soon Hoe Lim et al., "Noisy Recurrent Neural Networks," Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021.
### **A2. On the drift and diffusion components of error (and Q2)**
Thank you for the question. In our case, the decomposition of latent representation error into drift and diffusion terms arises naturally as additive noise to the encoder, which is interpreted as an SDE process with drift and diffusion coefficient functions matching the stochastic design of the world model. In general, since the error process is a stochastic term, it is natural to consider its mean (drift) and variance (diffusion) decomposition, a common approach in stochastic control (see [3] for more details).
The case of zero-drift noise occurs when the learned encoder is unbiased but has variance. In contrast, the case of non-zero-drift noise corresponds to a more general situation where the learned encoder is biased and has variance. These are general settings that occur in world model learning.
3. Ramon van Handel, "Stochastic Calculus, Filtering, and Stochastic Control: Lecture Notes," Spring 2007.
### **A3. On the interpretation of latent representation error encouraging state exploration**
Thank you for your insightful comment. We respectfully disagree with the reviewer’s assertion that erroneous latent states lead the agent to explore invalid states outside the state space of the MDP. We note that since the agent’s policy in the world model is conditioned on latent states, erroneous latent states $\tilde{z}$ lead the agent to take inaccurate actions $\tilde{a}$. The gap in the predicted value function due to latent representation error is captured in Corollary 4.2.
From the perspective of the task environment, the agent begins at a valid state $s$ and takes the inaccurate action $\tilde{a}$, but will still transition to a valid next state $\mathcal{T}(s, \tilde{a})$. This suboptimality can be interpreted as implicitly encouraging exploration by introducing a stochastic perturbation in the value function. Therefore, the agent remains within the valid state space while experiencing robustness similar to noise injection, as you suggested.
### **A4. On including low-level intuition for each result and overall readability**
We appreciate your helpful suggestion. We will include a notation table in the appendix for easy reference. We will also provide more intuitive explanations for our theoretical results, expanding on the current ones, including
Theorem 3.7: “The presence of this Hessian-dependent term S, under latent representation error, implies a tendency towards wider minima in the loss landscape…” (page 6, 234-235)
Corollary 3.8: “The presence of $\tilde{\zeta}$ in $\tilde{Q}$ and $\tilde{S}$ induces a bias to the loss function with its magnitude dependent on the error level $\epsilon$, since $\tilde{\zeta}$ is a non-zero term influenced by the drift term $\sigma$.” (page 7, 260-262)
Theorem 4.1: “... the expected divergence from error accumulation hinges on the expected error magnitude, the Jacobian norms within the latent dynamics model and the horizon length T.” (page 7, 300-302)
We believe these additions will greatly improve the readability and accessibility of our results.
_**Rebuttal will continue in the comments.**_
---
Rebuttal 2:
Title: Rebuttal 2/4 for Reviewer waKG
Comment: ### **A5. On the clarification of experiment settings**
We apologize for the confusion and will clarify further in the experiment section in the appendix. For the batch-size versus robustness experiment in Table 1, the considered perturbation methods are the same as those in the experiment on Jacobian regularization with perturbed states (Table 2 and Appendix D.2).
Regarding your specific questions:
_—”With regards to the perturbations - are they applied to every state in the trajectory?”_
Yes, the perturbations are applied at every time step. This consistency is maintained, particularly in the robustness experiments with encoder noise, to align with our theoretical results, which studied continuously introduced encoder noise.
_—”For masking, is the same mask used for every state, or is the mask also sampled randomly”_
The masks are sampled randomly for each state, not the same mask for every state.
_—”With regards to injecting encoder error - how to interpret the $\mu_t$ and $\sigma_t$ values?”_
We consider a Gaussian noise process where, at each time step $t$, $\mu_t$ and $\sigma_t$ denote the mean and variance of the Gaussian noise. An explanation is provided in the appendix (see page 28, line 756, and page 30, lines 764-765).
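For concreteness, a minimal numpy sketch of this per-step noise injection (our own illustrative code, not the paper's; it treats $\sigma_t$ as the variance, as above), contrasting the zero-drift and non-zero-drift cases:

```python
import numpy as np

def noisy_encode(z, mu_t, var_t, rng):
    """Perturb a latent state with Gaussian noise of mean mu_t and variance var_t."""
    return z + rng.normal(loc=mu_t, scale=np.sqrt(var_t), size=z.shape)

rng = np.random.default_rng(0)
z = np.zeros(4)

# Zero-drift noise (mu_t = 0): perturbations average out across steps.
zero_drift = np.stack([noisy_encode(z, 0.0, 0.01, rng) for _ in range(10000)])
# Non-zero-drift noise (mu_t = 0.1): a persistent bias remains.
biased = np.stack([noisy_encode(z, 0.1, 0.01, rng) for _ in range(10000)])

print(abs(zero_drift.mean()), biased.mean())
```

The empirical means illustrate why the drift (bias) component is the part that needs explicit regularization.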
In the revised version, with more page space available, we will include further clarification of the experimental settings in Section 5 from the main text to avoid any confusion.
### **A6. Responses to Questions**
### Q1: On the gradient estimation errors in Table 1
Thank you for your insightful question.
We used batch size experiments as a motivating example to introduce the relationship between error and generalization robustness. Batch-induced gradient estimation errors are relatively well-studied, whereas latent representation error is often overlooked. Our theoretical results on the effects of latent representation error interestingly align with the empirical effects observed for gradient estimation error in the zero-drift case:
- Zero-mean error: Zero-mean errors, such as gradient estimation error and zero-drift latent representation errors, have the potential to improve generalization.
- Controlling the error: To harness these generalization gains, the error must be kept in check. The batch size influences gradient estimation error, and keeping that error within a controlled range can enhance generalization, paralleling the control of latent representation error through Jacobian regularization.
In the revision, we will further clarify these points to avoid any confusion and provide a clearer explanation of the experimental settings of Table 1 in Section D of the appendix.
### Q3: Interpreting results for non-convex case
Thank you for the interesting question.
In the non-convex case, our most important finding from Corollary 3.8 remains relevant: the additional bias induced by the drift term of latent representation error can be controlled through the model’s Jacobian norm. While non-convex optimization presents more challenges due to the complicated gradient landscape, this control mechanism provides a valuable tool.
Non-convex optimization remains an open challenge, and we acknowledge the significant complexity of its gradient landscape. We are excited to pursue future work focusing on specific types of non-convex loss functions to extract more detailed insights and refine our understanding of the regularizing effects in these scenarios.
### Q4: Extending the analysis on TD-MPC
For TD-MPC, which has a deterministic latent dynamics model, our SDE analysis specializes directly: setting all diffusion coefficient functions to zero in equations (5)-(8) reduces them to systems of ODEs, provided that the learned MLP model in TD-MPC satisfies certain continuity assumptions guaranteeing the existence and uniqueness of solutions.
Regarding the effects of latent representation error in TD-MPC, this would resemble a simplified version of Corollary 3.8 without diffusion terms. The result would similarly introduce a bias to the loss function, with its magnitude dependent on the error level. This effect can be modulated by the MLP’s input-output Jacobian norm. We are excited about future work to develop the theoretical details and experimental results in this context.
---
Rebuttal 3:
Title: Rebuttal 3/4 for Reviewer waKG
Comment: ### Q5: — “An alternate method to improve robustness to perturbations is by training the model/value function on different augmentations of the states”
We agree that training the model/value function on different augmentations of the states is a valuable approach to improving robustness to perturbations. Following your suggestion, we added a new set of data augmentation experiments for comparison: we trained with state images augmented with randomly masked Gaussian noise. We evaluated it against 3 different perturbations: 1) modifying the gravity constant g in the DMC walker environment, 2) adding rotation to the input image state, and 3) adding masked Gaussian noise to the input image state.
The experimental results show that under varied gravity g and rotation perturbations, models trained with Jacobian regularization outperform state augmentation. Under masked Gaussian noise perturbation, Jacobian regularization outperforms state augmentation when the noise is denser (masks with $\beta = 70, 60$) and underperforms when $\beta = 40, 50$. This may be because state augmentation is trained with very similar Gaussian noise: it works well when the augmentation is consistent with the perturbation at inference time, but does not generalize to unseen perturbations. Please see the tables below and Figure 3 in the attached PDF for details.
| Gravity($m/s^2$) g | g = 9.8 (default) | g = 6 | g = 3 | g = 1 |
|---------------------------------|--------------------------|-------------------------|-------------------------|-------------------------|
| Aug w. $\mathcal{N}(0.15, 0.1)$ | 847.19 ± 131.85 | 771.34 ± 88.112 | 550.4 ± 75.8 | 390.7 ± 94.28 |
| Jac Reg ($\lambda = 0.01$) | **920.24 ± 39.952** | **906.42 ± 42.664** | **798.02 ± 95.936** | **603.88 ± 162.224** |
| Rotation ($^\circ$) $\alpha$ | $\alpha = 20$ | $\alpha = 25$ | $\alpha = 30$ |
|---------------------------------|--------------------------|-------------------------|-------------------------|
| Aug w. $\mathcal{N}(0.15, 0.1)$ | 286.63 ± 81.678 | 284.09 ± 59.801 | 213.93 ± 42.44 |
| Jac Reg ($\lambda = 0.01$) | **423.81 ± 12.9** | **301.84 ± 20.26** | **226.04 ± 23.00** |
| $\beta\%$ masked $\mathcal{N}(0.5, 0.15)$ | $\beta = 40$ | $\beta = 50$ | $\beta = 60$ | $\beta = 70$ |
|-------------------------------------------|--------------------------|-------------------------|-------------------------|-------------------------|
| Aug w. $\mathcal{N}(0.15, 0.1)$ | **846.76 ± 46.928** | **767.92 ± 78.256** | 373.08 ± 64.056 | 247.68 ± 54.576 |
| Jac Reg ($\lambda = 0.01$) | 804.21 ± 80.369 | 725.81 ± 50.714 | **730.87 ± 65.263** | **687.35 ± 63.222** |
The limitations of the rebuttal period prevent us from conducting a more comprehensive empirical comparison at this time (e.g., more augmentation patterns), but we are excited to continue this line of research in future work.
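For reference, the masked Gaussian noise perturbation used in these experiments (Gaussian noise added to a randomly selected $\beta\%$ of pixels) can be sketched as follows. This is a minimal illustration, not our training code: the image shapes, the $[0, 1]$ pixel range, and treating the second parameter of $\mathcal{N}$ as a standard deviation are assumptions made here for concreteness.

```python
import numpy as np

def masked_gaussian_perturb(state, beta, mu=0.5, sigma=0.15, rng=None):
    """Add Gaussian noise N(mu, sigma) to a beta-fraction of pixels.

    state : float array, e.g. an image observation with values in [0, 1].
    beta  : fraction (0-1) of pixels to perturb.
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(state.shape) < beta           # select ~beta% of pixels
    noise = rng.normal(mu, sigma, size=state.shape)
    perturbed = state + mask * noise                # noise only where masked
    return np.clip(perturbed, 0.0, 1.0)             # keep a valid pixel range

# Example: perturb 70% of a toy 8x8 "image"
rng = np.random.default_rng(0)
img = np.full((8, 8), 0.5)
out = masked_gaussian_perturb(img, beta=0.70, rng=rng)
```

Unmasked pixels are left untouched, so sparser masks (smaller $\beta$) correspond to milder perturbations.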
_—”on the pros/cons of using the Jacobian regularization term over such data augmentation methods.”_
Regarding the pros and cons of using the Jacobian regularization over data augmentation, we note the following:
Pros:
- Theoretical guarantees: Jacobian regularization is a principled approach grounded in theoretical results (Theorem 3.7 and Corollary 3.8), providing explicit control over model behavior in response to small error.
- Less reliant on data diversity: unlike data augmentation, which relies heavily on the diversity and relevance of the augmented samples, Jacobian regularization targets the learning dynamics of the world model (WM) itself.
- Lower likelihood of overfitting: unlike data augmentation, which can overfit to the specific perturbation patterns it was trained on rather than achieving general robustness, Jacobian regularization is less prone to overfitting.
- Bounding trajectory divergence: Jacobian regularization also mitigates error propagation during predictive rollouts (Theorem 4.1), a benefit not achieved by data augmentation.
Cons:
- Uncertainties with large error: The theoretical analysis for Jacobian regularization assumes the latent representation error is small. In cases when the encoder remains poorly learned, data augmentation may provide better model robustness.
- Computational overhead: computing Jacobian terms can introduce additional overhead.
### Q6: On the computational overhead of Jacobian regularization term
For every episode with 500 steps on an A100, training the model with Jacobian regularization took 28.702 seconds compared to 22.199 seconds for training without it (averaged over 20 episodes).
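To give a sense of where this overhead comes from, here is a minimal, hypothetical sketch of estimating an input-output Jacobian Frobenius-norm penalty. Our implementation uses automatic differentiation rather than the finite differences shown here, and the function, dimensions, and probe count below are illustrative only.

```python
import numpy as np

def jacobian_frob_penalty(f, x, num_probes=8, eps=1e-4, rng=None):
    """Monte-Carlo estimate of ||J_f(x)||_F^2 via finite differences.

    Uses E_v[||J v||^2] = ||J||_F^2 for v ~ N(0, I), with each
    Jacobian-vector product J v approximated by a forward difference.
    Each probe costs one extra forward pass, which is the source of
    the regularizer's runtime overhead.
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    total = 0.0
    for _ in range(num_probes):
        v = rng.normal(size=x.shape)
        jv = (f(x + eps * v) - fx) / eps   # approximate J @ v
        total += float(np.sum(jv ** 2))
    return total / num_probes

# Sanity check on a linear map f(x) = W x, where ||J||_F^2 = ||W||_F^2
rng = np.random.default_rng(1)
W = rng.normal(size=(5, 3))
penalty = jacobian_frob_penalty(lambda x: W @ x, np.zeros(3),
                                num_probes=2000, rng=rng)
```

With enough probes the estimate converges to the squared Frobenius norm of the Jacobian, which for the linear sanity check equals $\|W\|_F^2$.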
---
Rebuttal 4:
Title: Rebuttal 4/4 for Reviewer waKG
Comment: ### **A7. Response to Limitation**
We appreciate you raising this point. In the revision, with more page space, we will include a separate section addressing the limitations of our work. We will highlight that the main limitation of the SDE interpretation of world models (WM) is its restriction to a popular family of world models for theoretical soundness. As mentioned in the paper (lines 123-125), "we consider a popular class of world models, including Dreamer and PlaNet, where $\{z, \tilde{z}, \tilde{s}\}$ have distributions parameterized by neural networks’ outputs, and are Gaussian when the outputs are known."
Additionally, our results on the approximation error of latent representation focus on CNN (and similar) models, and future work is needed to generalize the study to models such as transformers. We are exploring this direction in our future work.
---
Rebuttal Comment 4.1:
Comment: I thank the authors for their detailed responses, which clarify several questions regarding the interpretation of the drift and diffusion coefficients, the exploration of novel states, and the experiment details.
Additionally, the comparison with data augmentation methods enhances the empirical results and provides a more well-rounded understanding of the proposed Jacobian regularization method.
I increase my score to reflect the changes.
---
Reply to Comment 4.1.1:
Comment: Thank you for your thoughtful feedback and for considering our responses. We are pleased that our clarifications and the comparison with data augmentation methods were helpful. We greatly appreciate your updated score and the opportunity to improve our work. | Summary: The paper studies the generalization capability of world models via a stochastic differential equation formulation. The authors seek to understand the effect of latent representation errors on generalization, considering both zero-drift and non-zero-drift representation errors. They find that zero-drift latent representation errors act as implicit regularization and thus bring generalization gains. Jacobian regularization is proposed to enhance training stability and generalization.
Strengths: + A deep understanding of the generalization of world models via stochastic differential equation formulation;
+ A careful study of the different effects of zero drift and non-zero drift on generalization
Weaknesses: + The unseen images are produced via global/partial Gaussian noises and rotation, which seems more on the robustness side rather than the generalization of unseen images;
Technical Quality: 3
Clarity: 3
Questions for Authors: See the Weaknesses part and explain how to extend the analysis to more general cases.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful comment!
- _“The unseen images are produced via global/partial Gaussian noises and rotation, which seems more on the robustness side rather than the generalization of unseen images;”_
Following your valuable feedback, we added a new set of experiments involving unseen dynamics to evaluate the robust generalization of our proposed Jacobian regularization. For the Mujoco tasks, we varied the acceleration constant due to gravity (default 9.8 m/s²). This tests the generalization capacity of the learned agent's model and the latent dynamics model to unseen dynamics. Our results indeed validate that Jacobian regularization (both $\lambda = 0.01$ and $0.05$) outperforms the baseline. We report the mean and variance of eval returns across 5 eval runs in the following table. Please also see Figure 3 in the attached PDF for a plot of varied g vs eval returns.
| Gravity($m/s^2$) | Baseline | Jac Reg ($\lambda=0.01$) | Jac Reg ($\lambda=0.05$) |
|------------|--------------------------|-------------------------|-------------------------|
| $g=1.0$ | 381.14 ± 132.968 | 603.88 ± 162.224 | **668.86 ± 89.072** |
| $g=3.0$ | 569.64 ± 65.048 | 798.02 ± 95.936 | **717.22 ± 95.056** |
| $g=6.0$ | 750.36 ± 122.248 | **906.42 ± 42.664** | 830.64 ± 62.848 |
| $g=9.8$ (default) | 936.32 ± 29.176 | **920.24 ± 39.952** | 904.918 ± 34.4944 |
We also acknowledge the challenges of setting up experiments for completely unseen/unrelated states and transitions in RL due to the complexity of defining such scenarios comprehensively. We considered local/global Gaussian noise, rotation, and varying acceleration constants due to gravity as perturbations. These were designed to best validate our theoretical findings, specifically Theorem 3.7 and Corollary 3.8, on implicit regularization, which link to favoring model solutions in regions of the loss landscape with improved generalization and robustness. Our approach is consistent with the literature's understanding of robust generalization [1, 2].
In the revised experiment section, we will clarify the distinction between robust generalization and more generic generalization to avoid any possible confusion.
1. Binghui Li et al., "Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power," Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022.
2. Soon Hoe Lim et al., "Noisy Recurrent Neural Networks," Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your valuable feedback and for taking the time to review our work. We have carefully addressed your comments in our responses.
We kindly remind you to review our responses, and we are happy to address any further questions you may have.
Thank you once again for your thoughtful insights and for contributing to the improvement of our work.
---
Rebuttal Comment 1.2:
Comment: The authors clarified my major concern about whether the tackled aspect is robustness or generalization. The added new experiment is very helpful.
---
Reply to Comment 1.2.1:
Comment: We are glad that the additional experiment and clarification was helpful in addressing your concerns. Thank you very much for your updated score and your valuable insights. | Summary: This paper explores the generalization capability of world models in reinforcement learning. In particular, they investigate the latent representation error in world models. They show that zero-drift representation error is inherently a regularizer for the learned model functions. On the other hand, they show that the non-zero-drift representation error accumulates errors and Jacobian regularization can be used to alleviate the issue. They demonstrate their proposed approach improves stability, convergence, and performance.
Strengths: 1. This work investigates an interesting aspect, the generalization of world models that learn the dynamics of the environment. Very limited work has been done in this facet of RL, thus it will share significant insights with the DRL research community.
2. The paper followed a structured methodology to analyze the world model and its representation errors. They interpret the learned model function as stochastic differential equations (SDEs) and model the variation as Brownian motions.
3. I liked the way they theoretically analyzed it case-by-case and established connections with prior findings.
4. The paper articulately presents the findings of zero-drift error as a regularizer and the Jacobian correction term for non-zero-drift representation error. It systematically proves its hypotheses and shows evidence supporting the claims, presenting the corresponding formulas and interpretations.
Weaknesses: 1. The paper is very thorough in terms of theoretical derivation. However, in my opinion, the experimental section of the paper is somewhat lacking. It utilizes only two tasks from Mujoco to prove the efficacy of the approach. More diverse tasks from other benchmarks and robust perturbations will certainly improve the paper.
2. The experimental evaluation is limited to reward comparison. However, it would be interesting to see some visualization of how the trajectories unfold in the case of both types of errors and with Jacobian regularization.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It appears in different tables different values of $\lambda$ (the regularization weight in eq. 20) have been used. How sensitive the models are to the value of this hyperparameter? Do you have any suggested range for better performance?
2. Do you observe any substantial relation between $\lambda$ and the task horizon?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While the paper discusses the potential social impact of the work, it doesn’t discuss any limitations. I believe the characterization of the models as SDE and the use of Brownian motion as variation have certain contributions to the identified claims. Other interpretations may alter the findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and suggestions! Based on your valuable feedback, we have improved the empirical analysis of our proposed regularization schemes with some new experiments and visualizations.
### **A1. Additional Experiments on Benchmark Environment**
Thanks for the constructive suggestion. We intend to add 1-2 additional experiment environments in the final paper to showcase the robustness against perturbation brought by Jacobian regularization. Here we include a comparison of the baseline method and Jacobian regularization in the challenging 2D environment Crafter [1] under noise perturbation. We add Gaussian noise with mean 0 and variance 0.25 to the image state (whose values range from -0.5 to 0.5) and apply masks with various percentages.
| mask $\beta \%, N(0, 0.25)$ | Baseline | Jac Reg ($\lambda= 0.1$) | Jac Reg ($\lambda = 0.01$) | Jac Reg ($\lambda = 0.001$) |
|-------|---------|----------|-----------|---------|
| $\beta=20\%$ | 9.1 | 16.96 | 17.14 | 23.18 |
| $\beta=60\%$ | 8.51| 14.11 | 14.03 | 17.47 |
| $\beta=100\%$ | 6.45 | 13.84 | 10.29 | 14.89 |
We note that, within the very limited rebuttal timeframe, the baseline method finished fewer training steps than its Jacobian counterparts, so earlier checkpoints are used for the baseline in this comparison. We observed that the baseline performance has already plateaued according to the train/eval curves, although it may still improve with more training steps. We will keep the baseline experiment running and update this table should there be any change in performance.
[1]: Danijar Hafner, 'Benchmarking the Spectrum of Agent Capabilities,' 2021
### **A2. Visualization of Reconstructed State Trajectories of Models Trained with/without Jacobian Regularization**
Thanks for your great suggestions. We have added visualizations of reconstructed state trajectory samples in the revision to showcase the error propagation of exogenous zero-drift and non-zero drift error signals in latent states with and without Jacobian regularization. Please see Figure 1 & 2 in the attached PDF file.
As shown in the figure, the reconstructed states for the baseline model without Jacobian regularization appear fuzzy, indicating the model has not correctly captured the dynamics of the environment, whereas the reconstructed states for the model with Jacobian regularization are sharp and correctly reflect the environment's dynamics. The visual comparison highlights the robustness against latent noise brought by Jacobian regularization.
### **A3. Responses to Questions**
### Q1. On the effects of $\lambda$ on model’s generalization
Thanks for raising the question. The choice of $\lambda$ can have a subtle influence on the model's performance, and the optimal value depends on the environment and the task. We suggest trying $\lambda$ in the range {0.1, 0.01, 0.001}. Please see the table in A1, where we present Crafter scores for 3 different values of $\lambda$ under Gaussian noise with mean 0 and variance 0.25 on the input image states (where pixel values range from -0.5 to 0.5) with various masks.
We will include a section to discuss $\lambda$'s influence for different environments in the final version of the paper.
### Q2. On the relation between $\lambda$ and task horizon
We did not observe a substantial relationship between $\lambda$ and task horizon in our current experiments. However, we hypothesize that as the task horizon gets longer, larger regularization weights ($\lambda$ around 0.1) would be more beneficial due to their tighter control on error propagation. Due to time limitations during the rebuttal period, we were unable to conduct these additional experiments, but we plan to study this further in future work.
### **A4. Response to Limitation**
We appreciate you raising this point. In the revision, with more page space, we will include a separate section addressing the limitations of our work. We will highlight that the main limitation of the SDE interpretation of world models (WM) is its restriction to a popular family of world models for theoretical soundness. As mentioned in the paper (lines 123-125), "we consider a popular class of world models, including Dreamer and PlaNet, where $\{z, \tilde{z}, \tilde{s}\}$ have distributions parameterized by neural networks’ outputs, and are Gaussian when the outputs are known."
In addition, our results on the approximation error of latent representation focus on CNN (and similar) models and future work is needed to generalize the study to models such as transformers. We are exploring this direction in our future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your positive feedback. We have carefully addressed your comments in our responses.
We kindly remind you to review our responses, and we are happy to address any further questions you may have.
Thank you once again for your thoughtful insights and for contributing to the improvement of our work. | null | null | Rebuttal 1:
Rebuttal: Dear reviewers,
We sincerely thank your comments and constructive suggestions. Below, we address the reviewer’s concerns point by point. We also attach a one-page PDF for visualizations and graphs.
Pdf: /pdf/41ad33ecd4cbef933c30e3d67a10108fa3094862.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Nearly Optimal Approximation of Matrix Functions by the Lanczos Method | Accept (spotlight) | Summary: The authors investigate the convergence of the Lanczos method for computing $f(A)v$ for a matrix $A$ and a vector $v$. They show a near instance optimality bound that involves the product of the condition numbers of matrices related to $A$ and empirically investigate the bound.
Strengths: The authors investigate the convergence of the Lanczos method for computing matrix functions theoretically and numerically. The results are also meaningful for the machine learning community since matrix functions appear in many machine learning algorithms.
Weaknesses: I think the authors should explain more about how we can interpret the proposed bound. The authors argue the bound captures the convergence behavior of the Lanczos method. The proposed bound involves the condition numbers of the matrices appearing in the denominator of the rational function. I think this is natural, but I couldn't clearly understand why the proposed bound explains the convergence behavior better than the existing bound in Fact 1: if the condition numbers of these matrices are large, then the magnitude of $f$ in the bound in Fact 1 can be large, and the factor involving $f$ can also become large.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you generalize your results to other types of Krylov subspace methods such as rational Krylov methods?
- In the middle and the right figures in Figure 1, the relative error seems very small. Did you use higher precision floating point format than double precision?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss limitations in Section 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We will respond to each of the questions in turn:
> “Can you generalize your results to other types of Krylov subspace methods such as rational Krylov methods?”
It would be interesting to see whether bounds of the flavor we describe can be extended to rational Krylov subspace methods. However, the class of functions we are studying are already rational functions. If one were to use a rational Krylov subspace method (with a suitably chosen set of poles), these functions could be applied exactly. So it is somewhat less clear exactly what the generalization would look like.
> “In the middle and the right figures in Figure 1, the relative error seems very small. Did you use higher precision floating point format than double precision?”
Yes, we used higher precision arithmetic. The precise setup is described in the first paragraph of Section 3.
> “I couldn't clearly understand why the proposed bound can explain the convergence behavior better than the existing bound in Fact 1”
It is true that our main bound (Theorem 1) is weaker if the condition number of each $A_i$ is large. This means that, as Figure 2 shows, it is sometimes better than Fact 1 and sometimes worse. See the discussion below Theorem 1, at the bottom of page 4. However, importantly, while it depends on the condition numbers, our leading constant factor $C$ does *not* depend on the number of Lanczos iterations. On the other hand, the term $\min_{\mathrm{deg}(p) < k-q+1} \||r(A)b - p(A)b\||_2$ in Theorem 1 does: it decreases with the number of iterations, $k$, typically at a much faster rate than Fact 1. For this reason, Theorem 1 usually beats Fact 1 eventually as the number of iterations increases. If our current $C$ factor can be improved (we suspect it can be), then Theorem 1 would beat Fact 1 earlier.
As an extreme example, it might help to consider the simplified case of a 2x2 matrix with only two eigenvalues: $\lambda_{\min}$ and $\lambda_{\max}$. Let’s compare Definition 2, which is the form of our main theorem, to Fact 1. Both of them upper bound the error by a minimization problem over polynomials of a certain degree. To get a good guarantee from Fact 1, we would have to find a degree-$k$ polynomial that closely matches $f$ on the entire range $[\lambda_{\min}, \lambda_{\max}]$. When the condition number of $A$ is large, this range will be large, and so this will be impossible. However, to get a good guarantee from Definition 2, we only need to find a polynomial that matches $f$ at two discrete points: $\lambda_{\min}$, and $\lambda_{\max}$. This is much easier to do. In fact, if $k \geq 1$, then our bound guarantees that we get zero error for any $\lambda_{\min}$, $\lambda_{\max}$, and any $f$. Fact 1 does not capture this behavior.
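This two-eigenvalue intuition is easy to check numerically. The following small sketch (our own illustration with made-up values, using $f(x) = 1/x$ and an extreme condition number) confirms that a degree-1 polynomial interpolating $f$ at $\lambda_{\min}$ and $\lambda_{\max}$ reproduces $f(A)b$ essentially exactly, even though no low-degree polynomial approximates $f$ uniformly on $[\lambda_{\min}, \lambda_{\max}]$:

```python
import numpy as np

# A 2x2 matrix with an enormous condition number, and f(x) = 1/x.
lmin, lmax = 1e-8, 1.0
A = np.diag([lmin, lmax])
b = np.array([1.0, 1.0])
f = lambda x: 1.0 / x

# Degree-1 interpolant of f through (lmin, f(lmin)) and (lmax, f(lmax)).
slope = (f(lmax) - f(lmin)) / (lmax - lmin)
p = lambda x: f(lmin) + slope * (x - lmin)

# For a diagonal A, applying a matrix function is elementwise on the diagonal.
exact = f(np.diag(A)) * b      # f(A) b
approx = p(np.diag(A)) * b     # p(A) b

rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
```

The relative error is at the level of floating-point roundoff, despite $\kappa(A) = 10^8$, which is exactly the behavior Fact 1 cannot capture.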
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. After reading the rebuttal and reviews from other reviewers, I decided to raise my score. | Summary: A matrix function f(A) for a real, symmetric matrix A and a univariate function f is given as \sum f(\lambda_i) u_i u_i^\top, where \lambda_i are the eigenvalues of A and u_i the corresponding eigenvectors. In practice, one is often interested in matrix-vector products f(A)b for some vector b. In the paper, the approximation quality of the Lanczos-FA method for approximating f(A)b for rational functions f is analyzed theoretically and experimentally. In particular, a known bound that is uniform in A and b is improved to instance-specific bounds using the notion of near instance optimality. The instance-specific bounds are not tight; however, they can better explain the experimentally observed good approximation quality of the Lanczos-FA method, which often outperforms more advanced Krylov subspace methods.
Strengths: Originality: The paper provides new theoretical insights into the observed practical behavior of the Lanczos-FA method. It contributes to a better understanding of Krylov subspace methods. The results are non-trivial.
Clarity: The paper is well structured and, in general, easy to read. I did not read the full proof of Theorem 1, but appreciate the proof sketch in Section 2.1.
Significance: As far as I can tell, the paper addresses an interesting, practically relevant problem. The paper does not only present upper bounds but also aims to find instances for the Lanczos-FA.
Weaknesses: As far as I can tell, there are no major weaknesses.
Minor typos and suggestions for improvement:
Introduction: Bayseian
Section 1.4: ... functions of interest(ing) ...
Footnote 2: Should \|x\|_A be \|b\|_a?
Line 148: The (second factor) of the second term ...
Figure 2: The axis labels are hardly legible.
Figure 5: matvec -> matrix-vector product
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I like the proof sketch in Section 2.1, which was easy to follow for a non-expert like me. The only thing that I did not see immediately was the equality below Line 145. It is probably an elementary result, but maybe you can provide a few more insights why this equality holds true.
Is well addressed in the author's rebuttal.
2. In the second paragraph of Section 3 (Experiments): Why did you choose the matrices (or the spectra of the matrices) as you did? How do practical spectra look like? What are extreme/challenging spectra?
Is also well addressed in the author's rebuttal.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As far as I can tell, there are no limitations. The bounds are not tight, but I do not consider this a limitation of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful feedback! We will address the two questions in turn:
The first asks for clarification about Line 145. The projection of a vector $x$ onto a linear subspace $S$ is defined as $\underset{y \in S}{\operatorname{argmin}} \||y - x\||_2$. The $A$-norm projection is defined as $\underset{y \in S}{\operatorname{argmin}} \||y - x\||_A = \underset{y \in S}{\operatorname{argmin}} \||A^{1/2} y - A^{1/2} x\||_2$. In our case, the linear subspace is the Krylov subspace $K_k(A,b)$. By definition, $Q$ is a basis for this subspace, so any $y \in K_k(A,b)$ can be written as $Qc$ for some $c$. Thus, the $A$-norm projection can be written $\underset{c}{\operatorname{argmin}} \||A^{1/2} Q c - A^{1/2} x\||_2$. Solving this least squares problem using the normal equations, we find that $c = (Q^\top A Q)^{-1} Q^\top A x = T^{-1} Q^\top A x$. Therefore, the projection is $y = Qc = Q T^{-1} Q^\top A x$. In other words, $\||x - Q T^{-1} Q^\top A x \||_A = \underset{y \in K_k(A,b)}{\operatorname{min}} \||x - y\||_A$. Finally, by definition, any vector in the Krylov subspace can be written as $p(A)b$ for some polynomial $p$ with degree $< k$, so we have $\underset{y \in K_k(A, b)}{\operatorname{min}} \||x - y\||_A = \underset{p}{\operatorname{min}} \||x - p(A)b\||_A$. To finish the argument, replace $x$ with $A^{-2}b$. We will modify the proof sketch to clarify this step.
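For readers who prefer a numerical check, the following sketch (a random SPD $A$ and a monomial Krylov basis orthonormalized by QR; all sizes are illustrative) verifies that $Q T^{-1} Q^\top A x$ coincides with the $A$-norm projection onto the Krylov subspace computed directly by least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 12, 4

# Random SPD matrix A (diagonally shifted for good conditioning) and vector b.
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)
b = rng.normal(size=n)

# Orthonormal basis Q of the Krylov subspace K_k(A, b) via QR.
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
Q, _ = np.linalg.qr(K)
T = Q.T @ A @ Q

# Candidate A-norm projection of x onto K_k(A, b): Q T^{-1} Q^T A x.
x = rng.normal(size=n)
y = Q @ np.linalg.solve(T, Q.T @ (A @ x))

# Direct minimization of ||A^{1/2}(x - Qc)||_2 over c via least squares.
w, U = np.linalg.eigh(A)
Ahalf = U @ np.diag(np.sqrt(w)) @ U.T
c, *_ = np.linalg.lstsq(Ahalf @ Q, Ahalf @ x, rcond=None)
y_direct = Q @ c

err = np.linalg.norm(y - y_direct)
```

The two projections agree to floating-point precision, which is the content of the normal-equations step above.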
The second asks about the choice of matrices for our experiments. First, note that all Krylov subspace methods are equivariant to orthogonal transformations, and the errors $\||f(A)b - \mathrm{alg}\||$ are invariant to such transforms (in the Euclidean norm). So, without loss of generality, we choose $A$ to be a diagonal matrix, as is standard in the literature.
It is well understood in the literature that the eigenvalue distribution has a large influence on the performance of Krylov subspace methods, often in subtle ways. For instance, see reference [8], “Towards understanding CG and GMRES through examples”. We used uniform, skewed, and clustered eigenvalue distributions to get some variety in the convergence behavior (as reflected in the different shapes of the curves in Figure 2). In particular, eigenvalue distributions that are clustered at the ends of the spectrum often have interesting behavior. A non-uniform distribution can emphasize the difference between the uniform optimality of Fact 1 and the instance optimality of our bounds. As we report in Section 3.1, distributions with a single outlying eigenvalue are among the most challenging for Lanczos-FA.
What practical spectra look like depends a lot on the specific problem, but they are often characterized by the presence of many small eigenvalues and a smaller number of large outlying and clustered eigenvalues (hence, why we consider skewed and clustered eigenvalue distributions).
We also thank you for spotting these typos. They will all be corrected in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. I like your paper. To make a clear statement, I will increase my score to accept. | Summary: The submission discusses the optimality (in terms of approximation error) of approximation $(A, b) \rightarrow f(A)b$ with Lanczos' algorithm.
The main result (Theorem 1) states that the Lanczos iteration is "near instance optimal". Near-instance-optimality relates to the optimal reconstruction of $r(A)b$ for rational functions $r$.
Via the triangle inequality, one can deduce similar statements for general $f$ provided $f$ can be approximated well by rational functions.
The submission proves Theorem 1 by repeatedly applying a similar optimality result for $A^{-1} b$ (i.e. $f=1/x$), which is why the final bound comes with scalar $\kappa(A)^q$ where $\kappa$ is the condition number of $A$ and $q$ the order of the denominator of the rational function.
Strengths: The paper provides an interesting perspective on the performance of Lanczos' algorithm for computing matrix functions.
The main strength is clarity, partly because the result is relatively strong and partly because the presentation is easy to follow.
I appreciate the proof sketch in Section 2.1 and the comparison to prior work in Appendix B (especially in B.2, which I was wondering about while reading the other parts of the manuscript).
I also appreciate the code submission.
Overall, I think this is a nice paper.
Weaknesses: The paper's main contribution is Theorem 1: near-instance optimality of Lanczos' method for computing $f(A)b$.
The weaknesses of this result are twofold:
1. The analysis is limited to exact arithmetic. It is well-known that implementations of Lanczos' method in exact versus finite-precision arithmetic may not match. Focussing on exact arithmetic somewhat limits the practical applicability of Theorem 1.
2. The bound in Theorem 1 includes the constant $\kappa(A)^q$. Similar works (including those discussed in Appendix B) achieve smaller constants, and in practical applications, $\kappa(A)$ can be gigantic. Since the largest $\kappa$ in the experiments seems to be $10^6$ (unless I have missed something), it is slightly unclear whether the bounds actually do explain the superiority of Lanczos' algorithm in practice (where $\kappa$ can exceed $10^6$ by a large margin; see, for instance, the condition number of Gaussian process covariance matrices or discretisations of PDEs).
That said, both weaknesses are openly discussed as limitations.
Furthermore, Krylov methods have gained popularity in machine learning in recent years, as evidenced by the growing number of papers/submissions on Lanczos, Arnoldi, CG, GMRES, etc.
Nevertheless, the submission's connection to existing machine learning literature would be strengthened if at least some of the matrix functions mentioned in the introduction reappeared in the experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: What follows is not directly a question, but the submission's format appears to deviate from the NeurIPS style in two ways:
1. The abstract contains two paragraphs, even though the template states, "The abstract must be limited to one paragraph."
2. It seems that the mathematics font has been changed from the original Neurips template.
I am unsure what NeurIPS's policy is here. Should these two things be changed?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are acknowledged. (I discuss this under "Weaknesses" above.)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough and thoughtful comments. This review raises several valuable points:
On the issues of exact vs. floating point arithmetic, please see the global response.
Regarding the leading constant in Theorem 1, it is true that our result is weaker if the condition number of each $A_i$ is large. This means that, as Figure 2 shows, it is sometimes better than existing bounds (Fact 1) and sometimes worse. See the discussion below Theorem 1, at the bottom of page 4. However, as discussed in Section 4, we believe that the dependence on the condition number in the leading constant $C$ can likely be improved significantly with a tighter theoretical analysis, bringing the bound closer to more specialized results (for specific matrix functions), like those discussed in Appendix B.1.
The review notes that it would be valuable for the paper to touch on the matrix functions mentioned in the introduction. While our main focus is rational functions, we do study several of the functions most relevant for machine learning both theoretically and empirically:
- Matrix square root and inverse square root: Analyzed theoretically in Appendix D. Experiments in Appendix E.1. Also discussed below Lemma 1.
- Matrix exponential: For theoretical analysis, see the discussion below Lemma 1. Experiments are shown in the right panel of Figure 1. A rational approximation to it is studied in Figure 2. See the discussion accompanying these figures at the beginning of Section 3.
- Matrix log: Experiments are shown in the right panel of Figure 1. A rational approximation to it is studied in Figure 2. See the discussion accompanying these figures at the beginning of Section 3.
- Matrix sign: This is discussed at the end of Section 3.2. Results are shown in Figure 5 and Figure 6.
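For concreteness, the basic Lanczos-FA recipe underlying these experiments — build a Krylov basis, then apply f to the small tridiagonal matrix — can be sketched in a few lines. This is a simplified illustration with full reorthogonalization (mimicking exact arithmetic), not the paper's actual code.

```python
import numpy as np

def lanczos_fa(A, b, k, f):
    """k-step Lanczos-FA approximation to f(A)b for symmetric A."""
    n = len(b)
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, U = np.linalg.eigh(T)                     # f(T) e_1 via eigendecomposition
    return np.linalg.norm(b) * (V @ (U @ (f(evals) * U[0])))

# toy check against the spectral definition of exp(A)b
rng = np.random.default_rng(0)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(0.1, 5.0, n)
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)
exact = Q @ (np.exp(eigs) * (Q.T @ b))
err = np.linalg.norm(lanczos_fa(A, b, 30, np.exp) - exact) / np.linalg.norm(exact)
```

Other functions (log, inverse square root, sign on a definite interval) drop in by swapping `f`, which is what makes Lanczos-FA a single recipe for all the functions listed above.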
Thank you for spotting the deviations from the NeurIPS style. These were oversights and we are more than happy to correct them for the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for iterating!
I agree with your perspective on exact versus finite-precision arithmetic. Nonetheless, assuming exact arithmetic remains a weakness of the analysis (though typical for this type of work).
I also agree with the rebuttal's discussion on the constant in Theorem 1. However, this constant also remains a weakness of the theory, even though there is hope that the bound can be tightened.
Finally, about the matrix function experiments: my review might have been poorly phrased; apologies for that. What it referred to was not that the matrix function was missing but its application. For example, the current experiments include log-determinant estimation in Figure 1, but only on a toy matrix, whereas a case study in Gaussian process regression would strengthen the paper's connection to the ML literature. (Same for the matrix sign or matrix exponential, for example.)
Altogether, I continue to recommend accepting the submission. Thank you again for the reply. | Summary: The authors study computation of matvecs of a matrix function via the Lanczos method, and in particular try to answer the question of why the basic Lanczos method is competitive or even superior to sophisticated implementations targeting specific matrix functions. They provide theoretical bounds on the error of Lanczos methods for rational functions, which they use to produce a bound on generic functions based on the distance to the nearest rational function, and demonstrate this in numerical experiments.
Strengths: The first major contribution of this article lies in the introduction section, which establishes the fact that Lanczos methods outperform sophisticated, problem-specific peers in some instances. This "theory-practice gap" in and of itself is an important and consequential fact to establish. The numerical experiments they do to demonstrate this are convincing and well communicated.
Subsequently, they offer important theorems governing the ability of Lanczos methods to approximate rational functions, and use this to find a bound based on the $\ell_\infty$ distance between a given function and the nearest rational function of low degree. Numerical experiments demonstrate these results on important practical cases.
The way that the main theorem was sketched as a special case in Section 2.1 is an exceptional move from a mathematical communication perspective. I was easily able to follow that specific proof, and it gave me intuition as to why the general case might be true without going through the more extensive proof in the appendix. I wish more articles did this, and I will try to do this in my future articles.
Weaknesses: In practice, we care about the effectiveness of this method in floating point arithmetic. But I think it makes sense to partition that off as a separate project, as the contents of the article as it stands are convincing on their own.
Here are some grammar issues I noticed (did not affect my scoring):
33) functions repeated.
72) an problem instance -> a problem instance
146) the "and <=" may be missing a term.
147 and 148) May be worth clarifying that it is the right hand side of (6) that is being referred to.
Technical Quality: 4
Clarity: 4
Questions for Authors: Congratulations on this paper.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The theoretical results rely on the poles of the rational function lying outside of the spectrum of the operator which might not always be the case, but for many practical situations it will be, and the authors' assertion that this case be considered future work is convincing to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough review! The point about floating point arithmetic is well-taken; see the global response for more. Thank you as well for the grammar corrections. These have all been corrected in our revision. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful comments. We were pleased with the quality and number of reviews we received, and have taken the feedback into account in order to improve the paper. In our global response we address some common points raised by the reviewers.
Several of the reviewers raised very reasonable questions about the applicability of our bounds to finite precision arithmetic. Generally speaking, the behavior of Lanczos in finite precision arithmetic is significantly more complicated than exact arithmetic. Therefore, bounds for exact arithmetic, such as the ones in this paper, serve as a starting point for a more complete understanding of the algorithm in finite precision arithmetic. We do observe that, typically, Lanczos-FA continues to perform near-optimally, even when run with the standard 16 digits of precision. Concretely, plots like those in Figure 1 and Figure 2 can be reproduced in finite precision. We will consider adding such plots to our revised version of the paper (as suggested by Reviewer cS79) to highlight this point.
We make three additional comments:
- First, we note that reorthogonalization is often used in practice. In this case, the behavior of Lanczos in finite precision is very similar to exact arithmetic and we expect our bounds would carry over with a little work.
- Second, even without reorthogonalization, there is some hope our theory can be applied. In particular, the analysis of Greenbaum (ref [32]) guarantees that the behavior of Lanczos in finite precision arithmetic is equivalent (in a precise way) to the behavior of Lanczos in exact arithmetic on a certain related problem. This allows exact arithmetic theory (e.g., our bounds) to be transferred to finite precision arithmetic. We discuss this in the final paragraph of Section 4. However, the precise equivalence is somewhat complicated. This would be an interesting direction for further work.
- Finally, we note that the analogous optimality guarantees for conjugate gradient in exact arithmetic are generally regarded as very important (even if they do not hold precisely in finite precision), because they provide intuition about how the algorithm works and are predictive of its excellent performance in practice. | NeurIPS_2024_submissions_huggingface | 2024 | Summary: The goal of this paper is to better understand the performance of using Lanczos for matrix-function times vector computations. Using Lanczos for this task is the de-facto standard, since it works very well (best vs competitors) in practice. Previous theory is very weak: there is a big gap between the bounds it specifies and the actual performance. The goal of this paper is to shrink and, where possible, eliminate this gap.
A stronger bound for Lanczos is established. Extensive experiments are reported.
Strengths: Very well written paper.
The main value of the paper is that it takes a well established method and de-facto standard (using Lanczos for matrix-function times vector computations), but with weak theory, and establishes much stronger theory. The reason this is important is that the weak bound motivated researchers to find better algorithms. Such algorithms had better theoretical bounds, but in practice were not as good as Lanczos. This paper closes the gap, giving stronger theory for Lanczos.
The experimental evaluation is robust. The appendix also contains additional bounds specific to the matrix square root and inverse square root.
Weaknesses: - Bounds are theoretic, and only explain what an existing algorithm achieves. There is no new algorithm in the paper.
- The assumption that the poles are outside the interval that contains the eigenvalues seems to imply that the bound works only for a very restricted subset of rational functions. How common is it?
- Bounds are only for exact arithmetic. The picture for finite-precision arithmetic can be very different. Even though I can understand why bounds for finite-precision arithmetic are outside the scope of this paper, the authors could have conducted experiments to see how much the bounds carry over to finite precision.
- Even though the problem of computing f(A)b has many ML applications, at its core the paper is focused on the quality of solving an NLA problem (computing f(A)b). It is more an NLA paper than an ML paper.
- In some experiments, in order to show a significant gap between the new bound and the old bound (Fact 1), a large number of iterations is used so that the error becomes extremely small. For example, in Figure 2 (center) the error starts at 10^-5 and goes to 10^-69. In Figure 2 (right), errors start at 10^-10! In applications you will rarely work in these regimes and will make do with much larger error. It is unclear whether the gap between Fact 1 and the new bound is significant in that regime.
- Lemma 1 seems a bit weak: the multiplicative factor of norm(b) is C_r, which can be huge. Sure, it is present in (7), but there the minimum it multiplies is presumed to be very small. In Lemma 1 it appears on norm(b), which is constant.
Technical Quality: 4
Clarity: 4
Questions for Authors: - How hard is it to build rational approximation with poles outside the interval of eigenvalues? Can you give a reference to a general result?
- Did you conduct experiments with finite precision to see how much the theory carries over?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Nothing to add.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough and thoughtful comments! This review raises many valuable points:
Regarding the difference between exact arithmetic and finite precision, please see the global response.
The review asks if it is realistic to build a rational approximation with poles outside the interval of eigenvalues. For many functions (e.g. exponential, $x^{\alpha}$), explicit rational approximations with poles outside of the interval of eigenvalues are known. (See, for instance, references 36 and 62.) For other “nice” (e.g. continuous) functions, one could apply the Remez algorithm to find the best rational approximation with real poles (see reference 63, chapter 24-5). The resulting approximation will not have any pole inside the interval of approximation, because otherwise the approximation would have infinite error near that pole. For less nice functions (e.g. the absolute value, which has a discontinuous derivative), the best rational approximations actually have *complex* poles which are clustered about discontinuities in the (higher order) derivatives. For instance, for the absolute value, the best rational approximation has poles on the imaginary axis. Our theory does not apply to this case, but this would be one of the most natural classes of functions to explore next.
For experiments, we chose to run Lanczos in extended precision because it allows us to highlight the qualitative behavior of our bound: Theorem 1 captures the correct shape of the convergence curve better than Fact 1. Nevertheless, several of our plots (e.g., the left and center panels of Figure 2) show that our analysis can still significantly outperform Fact 1, even when the desired accuracy or number of iterations is low. For some problems, this is not the case, mainly because of the large prefactor in Theorem 1, which depends on the condition number of $A_1, \ldots, A_q$. Such a prefactor does not appear in Fact 1, so our new bound is only better for a large number of iterations. That said, as discussed in Section 4, we believe the prefactor can likely be reduced with a better theoretical analysis, in which case the improvement of Theorem 1 would kick in sooner.
The review also asks about the multiplicative factor in Lemma 1. It is correct to note that in Equation 8, $\|b\|_2$ is a constant. However, in this term, $C_r$ is *also* multiplied by $\|f - r\|_I$, which denotes the approximation error in the max-norm for the rational function approximation $r$. As the degree of $r$ grows, this goes to zero, often much more quickly than the error of the best polynomial approximation to $f$ on the interval $I$.
---
Rebuttal Comment 1.1:
Comment: Thank you for the answers. I think some form of them should be included in the revised manuscript.
My score was already positive before (7 - accept), and stays the same after reading the rebuttal. | null | null | null | null | null | null |
Efficient Multi-task Reinforcement Learning with Cross-Task Policy Guidance | Accept (poster) | Summary: The paper presents Cross-Task Policy Guidance (CTPG), a novel framework designed to improve multi-task reinforcement learning (MTRL) by leveraging cross-task policy similarities. CTPG trains a guide policy for each task to select the most suitable behavior policy from a pool of all tasks' control policies. This method aims to generate better training trajectories and enhance learning efficiency. The authors propose two gating mechanisms: a policy-filter gate to filter out non-beneficial control policies and a guide-block gate to block unnecessary guidance for mastered tasks. Empirical evaluations on manipulation and locomotion benchmarks demonstrate that integrating CTPG with existing parameter sharing approaches significantly enhances performance.
Strengths: ### S1. Innovative Approach
The paper introduces a novel method for leveraging cross-task similarities in MTRL. By training a guide policy to select behavior policies from a pool of tasks, the approach provides a direct and efficient way to exploit shared skills, which is an underexplored area in MTRL.
### S2. Empirical Validation
The proposed framework is validated through extensive experiments on various benchmarks, including MetaWorld and HalfCheetah. The results demonstrate significant improvements in learning efficiency and performance when CTPG is integrated with existing MTRL approaches.
### S3. Clear Presentation
The paper is well-structured and clearly explains the CTPG framework, including detailed descriptions of the guide policy and gating mechanisms. Figures and tables effectively illustrate the benefits of the proposed method, and the provided pseudocode enhances reproducibility.
Weaknesses: ### W1. Scalability Concerns
The scalability of CTPG to more complex and large-scale environments is not thoroughly discussed. While the method shows promising results in the tested benchmarks, a broader analysis of its scalability and practical utility in more complex scenarios may improve the significance of this work.
### W2. Computational Complexity
The computational complexity of training and deploying CTPG, particularly the guide policy and the gating mechanisms, is not explicitly addressed. Understanding the computational requirements and potential limitations in terms of resources and execution time would provide a more comprehensive evaluation of its applicability.
Technical Quality: 2
Clarity: 3
Questions for Authors: ### Q1. Scalability to Complex Environments
How does the CTPG framework scale to more complex, large-scale environments? Are there any specific challenges or limitations that need to be addressed for practical deployment in such scenarios?
### Q2. Computational Requirements
What are the computational requirements for training and deploying CTPG? How does the method perform in terms of execution time and resource consumption compared to existing MTRL methods?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors acknowledge the limitations related to the predetermined guide step K and the reliance on specific benchmarks. However, a more detailed discussion on potential negative societal impacts and strategies to mitigate them would be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback and acknowledgment of our efforts. Following are our responses to all your concerns.
> Scalability to complex environments.
We have tested CTPG on MetaWorld and HalfCheetah, the commonly used and authoritative benchmarks in the MTRL community, covering the two main fields of manipulation and locomotion. In both environments, with diverse task settings, CTPG has shown outstanding performance, demonstrating that CTPG can effectively learn policy sharing between tasks and thus improve the sample efficiency of MTRL. In addition, CTPG is a general MTRL framework that can be adapted to other MTRL environments and other RL algorithms. We conducted experiments using another base RL algorithm, TD3, in **Fig.3 of the global Rebuttal Comment PDF**, and the results demonstrate that CTPG is equally applicable to other algorithms to enhance sample efficiency. CTPG can also be migrated to other MTRL benchmarks without additional configuration and deployment. If you have a recommendation for another benchmark, we'd be happy to try it there too :)
> Computational requirements and computational complexity.
We provided detailed information about the computational resources used in our experiments in Appendix E.1. For computational complexity, we show the training curves with the x-axis being wall-clock time on the MetaWorld-MT10 environment in **Fig.4 of the global Rebuttal Comment PDF**. The results indicate that although CTPG takes longer to train, it still outperforms baselines for the same training time. Furthermore, during the evaluation and deployment phase, CTPG's guide policy will no longer be used, so no additional execution time and storage space is required.
> Discussion on potential negative societal impacts.
We provided Broader Impacts in Appendix F. CTPG is a general MTRL framework and does not introduce additional societal impacts that need to be discussed. If you feel there are any necessary societal implications that need to be discussed, please feel free to point them out, and we will add discussions about them in the revised manuscript.
**Thank you once again for your valuable feedback. We hope our response has satisfactorily addressed your concerns. If you find that we have addressed your concerns, we kindly hope you reconsider your rating. If you need further elaboration or additional points to include in the response, we welcome further discussion to ensure everything is clear and satisfactory.**
---
Rebuttal Comment 1.1:
Title: Response to authors rebuttal
Comment: I appreciate the authors for the detailed response. Most of my concerns are addressed therefore I'm raising my assessment.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for reviewing our response. We are pleased that our response has effectively addressed your concerns. Once again, we sincerely appreciate your thorough review and insightful feedback.
Best regards,
All authors | Summary: This paper tackles the problem of multi-task reinforcement learning. Previous works have primarily focused on MTRL through specialized network structures or methods to resolve conflicting gradients. This paper considers the problem from an orthogonal perspective, by actively sharing control policies for each task adaptively, in the hope that sometimes the policy of one task can guide the exploration of other tasks. Specifically, similar to in hierarchical RL, a high-level policy is used to decide which “task policy” to call at each particular state, where “bad” policies are masked out through a learned value function. Evaluated on Metaworld and HalfCheetah MTRL benchmark, the method shows improved performance compared to previous works.
Strengths: - The idea of exploring with cross-task policies to facilitate multi-task reinforcement learning is interesting and novel.
- The method is thoroughly evaluated on a set of test domains, and shows superior performance compared to previous methods
- This paper is well written, and the method is easy to follow.
Weaknesses: - The proposed method contains many floating pieces that seem to add burden both to implementation and hyperparameter tuning. Some of the pieces seem rather like hack than a principled solution. For example:
- For the hindsight off-policy correction, the action of the guide policy is relabelled to the one that is most likely to generate the sequence of actions. However, it is likely that none of the control policies can generate the old sequence of actions with high probability, in which case relabeling will not help at all.
- The action space masking is essentially masking out control policies with low Q values. Shouldn’t this already be reflected in the policy through policy gradient updates? Why do we need additional masking?
- Using the temperature coefficient of SAC as an indication of the policy performance is very empirical and may not always be true. This also makes the proposed method specific to adaptive temperature SAC.
- It seems that the hindsight off-policy correction would require much more forward passes than a regular SAC. Can the author show the performance of the proposed method with x axis being wall-clock time?
- For the proposed method to be effective, we need the policy of some other tasks to have better performance than the policy that is trained for the current task. One scenario I can think of is when the multiple tasks secretly form a curriculum (e.g. one task is the prerequisite for another task), which might be the case in Metaworld and halfcheetah. But I’m not sure how realistic such an assumption is in the real world.
Minors:
- Line 207: dose → does
- “someone who can ride a bicycle can quickly learn to ride a motorcycle by referring to related skills” → I don’t think that’s actually the case…
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback and acknowledgment of our efforts.
> Floating pieces
1. Hindsight off-policy correction: On the one hand, it is deeply integral to our CTPG framework, instead of just a floating piece. It addresses the unavoidable non-stationarity issue in off-policy training. We demonstrate this in Fig 5c and 11c, where it is helpful and its effect is more pronounced with more tasks, reinforcing its necessity within CTPG. On the other hand, it seems that there is a misunderstanding about the reason for proposing this module. The issue you mentioned does indeed exist and is still a matter of interest for the community. In such a case, the data is invalid for training because none of the policies can regenerate this old sequence, so the hindsight off-policy correction won't work either. However, our module is not proposed to solve this issue and has its own motivation. It should not be considered a floating piece merely because it does not address a problem outside its intended scope.
2. Action space masking in the policy-filter gate: The integration of this module with CTPG is also essential, which avoids negative policy guidance from unhelpful control policies. Although the guide policy can also be learned by gradient updates only, the policy-filter gate serves to expedite this process by narrowing down the candidate policies, especially when the number of tasks is particularly large, thus further reducing the exploration space and improving sample efficiency. As shown in Fig 5a and 11a, training efficiency will be significantly reduced without this gate. Here we want to emphasize that although the functionality of this gate can be achieved by policy gradient update, our module can significantly improve training efficiency.
3. Temperature coefficient of SAC in the guide-block gate: We aim to use it to filter tasks that have been mastered or converged on. The choice of SAC and its adaptive temperature mechanism was made due to its robust performance in various RL tasks. We want to emphasize that CTPG is a general MTRL framework that can be adapted with other RL algorithms. We explore the adaptation of TD3 in **Fig.3 of the global Comment PDF**. Using the temperature of SAC as an indicator is indeed an empirical choice. We admit it may not always be true, thus we provide several alternatives not specific to SAC. For example, we try an intuitive metric, the success rate, in Fig 5b and 11b, which performs comparably to the SAC temperature. Besides, policy entropy (in algorithms like PPO) or cumulative rewards (when there is no significant change) can also perform as indicators. The guide-block gate serves as an essential module with various implementations, and we provide several alternatives and welcome further study based on ours.
The three modules collectively contribute to sharing effective policies between tasks, thus improving sample efficiency, which is the key goal of MTRL. In addition, these three modules do not involve any hyperparameter settings, so there is no additional tuning burden. We hope our response changes your initial perception that these are just floating pieces.
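To make the role of the policy-filter gate concrete, here is a minimal sketch of Q-value-based action masking for a discrete guide policy. The threshold rule, names, and numbers are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def policy_filter_gate(q_values, logits, keep_ratio=0.5):
    """Mask control policies whose Q-value falls below a quantile threshold,
    then renormalize the guide policy's softmax over the survivors."""
    thresh = np.quantile(q_values, 1.0 - keep_ratio)
    masked = np.where(q_values >= thresh, logits, -np.inf)  # -inf => probability 0
    p = np.exp(masked - masked.max())
    return p / p.sum()

q      = np.array([1.2, -0.3, 0.8, 2.1])   # critic's estimate per control policy
logits = np.array([0.0, 0.5, 0.2, 0.1])    # guide policy's raw preferences
probs = policy_filter_gate(q, logits)      # low-Q policies get zero probability
```

The gate shrinks the guide policy's effective action space up front, which is the mechanism claimed to speed up exploration relative to relying on gradient updates alone.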
> Forward time of hindsight off-policy correction and performance with wall-clock time
Compared to the gradient update process, hindsight off-policy correction requires only forward computation, making the time consumed negligible. In addition, the calculation of each control policy's probability for generating the historical action sequence can be parallelized. This is achieved by extending the input of the actor from [batch\_size] to [batch\_size, task\_num], where the second dimension distinguishes the different task representations.
**Fig.4 of the global Comment PDF** shows the training curves with wall-clock time on MetaWorld-MT10. Although CTPG takes longer to train, it still outperforms baselines for the same training time.
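The batched forward pass described above can be sketched as follows. The diagonal-Gaussian policies and all parameter values are hypothetical placeholders; the point is only the vectorized log-likelihood computation across policies:

```python
import numpy as np

rng = np.random.default_rng(0)
n_policies, seq_len, act_dim = 4, 8, 3

# placeholder policy outputs: per-policy action mean / log-std at each step
means    = rng.standard_normal((n_policies, seq_len, act_dim))
log_stds = np.full((n_policies, seq_len, act_dim), -0.5)

# an action sequence collected in the past (here: near policy 2's means)
actions = means[2] + 0.05 * rng.standard_normal((seq_len, act_dim))

# one batched pass: Gaussian log-likelihood of the whole sequence per policy
z = (actions[None] - means) / np.exp(log_stds)
log_probs = (-0.5 * z**2 - log_stds - 0.5 * np.log(2.0 * np.pi)).sum(axis=(1, 2))

relabeled = int(np.argmax(log_probs))  # hindsight label: most likely generator
```

Because the per-policy terms are independent, a single broadcasted evaluation replaces a loop over policies, which is why the correction adds only negligible forward-pass cost.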
> Scenarios that CTPG works
CTPG does not require one task to be a sub-strategy of another. If tasks can naturally form a curriculum as you mentioned, CTPG definitely can learn an effective policy. Even if other tasks' control policies do not perform as well on the current task, CTPG's guide policy can still facilitate an effective mixed policy. Specifically, the agent can choose different control policies at different stages within a single episode, as shown in Fig 4 in the paper. In other words, the policy guidance uses a learnable mixture policy consisting of all control policies. Furthermore, referring to Reviewer xTD7's suggestion, we set up a new baseline called BPT (Best Performance Transfer), which selects the best-performing policy in the current task among all task policies to guide the exploration. The results shown in **Fig.1 of the global Comment PDF** demonstrate that CTPG also outperforms BPT. This is because CTPG's guide policy learns a more flexible strategy, rather than simply relying on a single best-performing policy for guidance.
Here we extend the earlier example involving bicycle and motorcycle for better understanding. Suppose an agent is learning four tasks: unicycle(U), bicycle(B), motorcycle(M), and automobile(A). They do not form a curriculum and need to be learned simultaneously. (B) and (M) share the skill balancing on two wheels. (U) and (B) share the skill pedal-driven, while (M) and (A) share the skill ignition to move. With CTPG, if the agent has learned the pedaling skill in (B), it can transfer this skill to (U), even if it hasn’t yet mastered balancing. Similarly, if the agent has learned balance in (B) and ignition in (A), it can quickly master (M) by combining these two skills.
> Minor errors
Thank you for pointing out the typo "dose", we will correct it in the revised manuscript.
**Thank you once again for your valuable feedback. If you need further elaboration or additional points to include in the response, we welcome further discussion to ensure everything is clear and satisfactory.**
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications. I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your time. If there are any remaining issues or concerns that have not been addressed, please let us know. We would be more than happy to engage in further discussion to resolve them. If all concerns have been addressed, we kindly hope you can reconsider the rating assigned to our submission.
Thank you once again for your valuable input.
Best regards,
All authors | Summary: This paper addresses the problem of multi-task reinforcement learning (MTRL). To this end, the paper proposes a method that selectively shares behaviors from the policies learning to solve other tasks. The experiments conducted in a locomotion domain (multi-task Half-Cheetah) and a robot arm manipulation domain (Meta-World) verify that the proposed method can improve the performance of the learned policies. Ablation studies justify the effectiveness of many proposed components, including the policy-filter gate, the guide-block gate, and the hindsight correction. Yet, I am not convinced that the proposed method could improve the sample efficiency, which is the key goal of MTRL. Also, it seems that the proposed method is incremental and lacks novelty. Therefore, I am leaning toward rejecting this work in its current form.
Strengths: **Motivation and intuition**
- The motivation for sharing behaviors for MTRL is convincing.
**Experimental results**
- The main experimental results show that the proposed method achieves better converged performance compared to the baselines in MT Half-Cheetah and Meta-World.
**Ablation study**
- The ablation studies justify the effectiveness of the policy filter gate, guide block gate, and hindsight correction.
**Reproducibility**
- The code is provided, which helps understand the details of the proposed method.
Weaknesses: **Main results (Table 1)**
- Table 1 and Section 5.2 present quantitative results of converged performance. However, as far as I am concerned, the comparisons of MTRL should focus on sample efficiency, i.e., how fast each method can learn, instead of converged performance. Therefore, I am not convinced by this evaluation.
**Comparison to k-step QMP**
- The paper states that the proposed method outperforms QMP because "guide policy learns long-term policy guidance." I am not entirely convinced. One can simply use QMP to select from k-step behavior proposals and execute the selected proposal, which can also achieve this long-term policy guidance. Including this variant of QMP would be necessary to show that the performance gain comes from other designs of the proposed method.
**Guide step K**
- The best guide step K in the MT Half-Cheetah and Meta-World are different. It is unclear, given a new MTRL domain, how we should choose K. It seems that we could only tune this hyperparameter via trial and error.
**Backbone RL algorithms**
- This work adopts SAC as the backbone RL algorithm. Is it possible to use other RL algorithms, such as TD3 or PPO?
**Clarity**
- Section 4 is difficult to follow. Sufficiently describing the intuitions before introducing each component of the proposed method would significantly improve the readability of this section.
**Novelty**
- The proposed method seems incremental given that the QMP paper explores this idea of behavior sharing. Despite the high similarity between this work and the QMP paper, the authors seem to deliberately hide this by avoiding discussing QMP in the introduction.
Technical Quality: 2
Clarity: 2
Questions for Authors: See above
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback and acknowledgment of our efforts. Following are our responses to all your concerns.
> Sample efficiency comparison.
Please refer to **Figure 10 (Appendix D.1)**, which presents the full training curves for our main experiment (Table 1). It shows that beyond the ultimate performance improvement, CTPG also enhances the sample efficiency.
> Comparison to K-step QMP.
In our experiments, we use the hyperparameter settings from the original QMP paper. Thus, the QMP in our experiment is exactly the K-step QMP ($K$ = 10 in implementation) you asked for, which, as you mentioned, shows that the performance gain comes from other designs of the proposed method. We apologize for the ambiguity and will add this hyperparameter setting to the Appendix in the revised manuscript. Thank you again for pointing this out.
> Hyperparameter guide step $K$.
Guide step $K$ is a necessary hyperparameter introduced by CTPG and may need slight adjustment across testbeds. However, we want to emphasize that the improvement gains of CTPG do not rely on the careful tuning of $K$ at all. As shown in **Fig.2 of the global Rebuttal Comment PDF** (a clear version of Figure 12), the performance improvement is evident even without tuning $K$. If the user pursues the potential best performance, we recommend exploring the value of $K$, and it can be selected easily with the help of hyperparameter search methods, such as Bayesian Optimization, Hyperband, etc. In the Conclusion and Discussion Section, we have already indicated that our future work will focus on dynamically selecting the guide step $K$.
In addition, the output of CTPG's guide policy can add another action dimension to choose a specific guide step $K$. This approach not only automatically selects the $K$ value in different environments but also dynamically selects it at different time steps. We already have preliminary experimental results demonstrating the effectiveness of this scheme, and it also shows the high scalability of the CTPG framework. However, we believe that the contribution could be a new work, so it is mentioned only in future work.
> Adaptability of other RL algorithms to CTPG.
CTPG is a general MTRL framework that can be combined with other RL algorithms, *even allowing the control and guide policies to use different RL algorithms*. Since SAC is widely used in continuous control and serves as the base algorithm for all baselines, we also choose SAC as the base algorithm in this work. In addition, we explore adapting CTPG to TD3 and also employ different RL algorithms for the control and guide policies. Specifically, the control policy uses TD3, and the guide policy uses DQN. This approach is compared with the TD3-based MTRL method, and the result is displayed in **Fig.3 of the global Rebuttal Comment PDF**. The result shows that CTPG can enhance performance and sample efficiency when combined with other backbone RL algorithms.
> Section 4 is difficult to follow.
Thank you very much for your suggestion. We will include an additional intuition before introducing each module to enhance readability in the revised manuscript.
> Similarity with QMP.
We highly respect QMP and believe that QMP is an excellent work that greatly inspired our research. We discussed their differences in the Related Work section. Moreover, QMP serves as the most important (and only) baseline in our experiments, and we openly compared our method against it. Following your suggestion, we will include a more detailed introduction of QMP and its great contribution in the revised Introduction.
Here, we would like to re-emphasize the difference between CTPG and QMP: QMP uses the maximum Q-value of a single step to select the shared behavior over continuous K steps, which *only* guarantees the optimality of the first step of the shared policy. In contrast, CTPG learns a guide policy with the action space being exactly the task control policies to identify useful sharing policies for guidance by considering the benefits over K steps collectively. Additionally, CTPG proposes two gating mechanisms to avoid negative transfer from unhelpful policies, thereby improving sample efficiency.
**Thank you once again for your valuable feedback. We hope our response has satisfactorily addressed your concerns. If you find that we have addressed your concerns, we kindly hope you reconsider your rating. If you need further elaboration or additional points to include in the response, we welcome further discussion to ensure everything is clear and satisfactory.**
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for the rebuttal, including the additional TD3 results and clarifications. I am increasing my score to 5 to reflect what's addressed by the author's rebuttal.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for taking the time to read our response. We are pleased that our response has effectively addressed your concerns. Thank you once again for your valuable input.
Best regards,
All authors | Summary: This paper proposes a method called CTPG to enable policies trained on different tasks to learn from each others' generated trajectories. CTPG operates by learning a guideline policy that determines which control policy in a given set should best generate trajectories to enable agents to learn a particular task. Then, the authors also designed additional mechanisms that (i) deal with how each policy whose trajectories are sampled may change from their learning process, (ii) prevent negative transfer from using trajectories from irrelevant policies for learning, and (iii) promote the convergence towards good policies by preventing any transfer of experience once a policy is already proficient at its intended task.
Using environments from multi-task reinforcement learning benchmarks, the authors then investigated their method's performance when (i) learning a set of tasks in parallel and learning a new task that is not from the set of previously learned tasks. Furthermore, the authors also conducted further analysis to investigate (i) which component within their learning algorithm is most responsible for the method's performance and (ii) whether the guideline policy learned a sensible experience selection policy.
Strengths: **Major Strength - Clarity**
Except for a few minor clarification details provided below, I find the paper to be well-written. I especially appreciate the clear outlining of (i) the motivation behind the proposed method, (ii) the questions being investigated by the authors, and (iii) the experiment design. I hope that future iterations of the manuscript will keep this same level of clarity.
**Major Strength - Method Soundness**
From a high-level perspective, it seems that the learned transfer learning mechanism proposed in this paper seems reasonable. I especially find the authors' design of various mechanisms to address different problems (i.e., policy selection, negative transfer, variable rates of learning between tasks) to be great choices that should improve learning performance in multi-task reinforcement learning scenarios.
**Major Strength - Experiments and Analysis**
I also think that the authors did a great job of thoroughly investigating the efficacy of their method. The five questions in the experiment section provided interesting insights into the method. At the same time, the experiments done to answer each of those questions were well-designed and showed the method's positive performance.
**Minor Strength - Significance**
While allowing agents to learn from each others' experience to improve their performance at achieving their respective objectives is not exactly new, to my knowledge, the authors' proposed method for learning in multi-task reinforcement learning (and addressing its various associated issues) is novel. Even if I am wrong on the novelty aspect of this method, I still believe the thorough analysis provided by the authors would contribute to different insights that may be useful for the multi-task RL community.
Weaknesses: **Minor Weaknesses - Additional Comparisons**
Perhaps another baseline that could be compared is the simple strategy from [1], where one simply learns from another policy with the highest performance at the task. Using this baseline should elucidate the effects of switching between different policies to transfer from as opposed to just transferring from a policy that seems to perform the best.
**Minor Weaknesses - Hindsight Off-Policy Correction**
From Section 4.1 alone, I also do not find how the hindsight off-policy correction mechanism affects the remainder of the learning process. While I did check Algorithm 2 in the appendix, I am uncertain why $j^{'}_{t}$ (the behavioral policy chosen in hindsight) only affects SAC's critic updates and not also the policy updates.
References:
[1] Disentangling Transfer in Continual Reinforcement Learning. Wolczyk et al. NeurIPS 2022.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How would CTPG perform against a simple transfer learning strategy where experience is only transferred from the policy that has the best performance?
2. Does the behavioral policy chosen in hindsight affect the actor-network training? If it does, how would it do so?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have sufficiently outlined their method's limitations and interesting directions for future work in the last paragraph of the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback and acknowledgment of our efforts. Following are our responses to all your concerns.
> Additional comparison with a simple transfer learning strategy where experience is only transferred from the policy that has the best performance.
[1] focuses on continual RL and requires a predefined task-learning sequence, which does not match the experimental setting of MTRL, so we cannot directly compare it with CTPG and the other baselines. Therefore, we implement a comparable version based on the core idea of [1] and your description: selecting the policy that performs best on the current task among all task policies to guide exploration when generating trajectory data. Specifically, after every $H$ rounds of data collection, we evaluate all policies across all tasks and choose the best-performing policy for each task to collect the data for the next $H$ rounds. We refer to this baseline as BPT (Best Performance Transfer) and set $H$ to 50 episodes in our implementation. We use MTSAC in HalfCheetah-MT8 and MHSAC in MetaWorld-MT10. The result, shown in **Fig.1 of the global Rebuttal Comment PDF**, demonstrates that CTPG outperforms BPT. CTPG's guide policy learns a more flexible strategy: it can not only learn to share a single policy within a complete trajectory (like BPT), but also develop a mixture policy by combining different control policies, making it more transferable between tasks. Notably, BPT performs better in HalfCheetah-MT8 than in MetaWorld-MT10 because the tasks in HalfCheetah-MT8 share more significant similarities, whereas the policy transfer and guidance in MetaWorld-MT10 require combination strategies.
[1] Disentangling Transfer in Continual Reinforcement Learning. Wolczyk et al. NeurIPS 2022.
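The BPT selection rule described above can be sketched in a few lines (a hypothetical illustration, not our actual code; `eval_returns` is an assumed matrix of evaluated returns):

```python
def select_bpt_policies(eval_returns):
    # eval_returns[j][t] holds the evaluated average return of task j's
    # control policy on task t; for each task, pick the index of the
    # best-performing control policy to collect its next H rounds of data.
    num_tasks = len(eval_returns)
    return [max(range(num_tasks), key=lambda j: eval_returns[j][t])
            for t in range(num_tasks)]
```

Unlike this fixed per-task choice, CTPG's guide policy can switch among control policies within a single episode, which is what produces the mixture behavior discussed above.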
> The effect of the hindsight off-policy correction mechanism on SAC's actor network training.
The hindsight off-policy correction mechanism affects not only the critic update in SAC but also the actor update.
Although the actor loss (Line 4 in Algorithm 2) does not explicitly use relabeled $j'_t$, it indirectly affects actor learning due to SAC's unique actor update method [2]. Specifically, SAC's actor loss is derived from its optimization objective, and the actor policy is updated according to:
$
\pi_{new} = \arg\min_{\pi' \in \Pi} D_{KL} \left( \pi'(\cdot | s_t) \left\| \frac{\exp\left(\frac{1}{\alpha} Q^{\pi_{old}}(s_t, \cdot) \right)}{Z^{\pi_{old}}(s_t)} \right. \right),
$
where the partition function $Z^{\pi_{old}}(s_t)$ normalizes the distribution. In essence, SAC's policy is updated by fitting the softmax distribution of the Q-values with temperature $\alpha$. Therefore, modifying the update from $Q(j_t|s)$ to $Q(j'_t|s)$ using hindsight off-policy correction leads to a different actor optimization objective, thus affecting the training of the actor network. We will also elaborate on how the hindsight off-policy correction mechanism affects guide policy training in the revised manuscript.
[2] Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. Haarnoja, Tuomas, et al. ICML 2018.
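Since the guide policy is categorical, the KL objective above can be evaluated exactly rather than by Monte Carlo sampling. A minimal sketch (illustrative only, not our training code; `pi_probs` and `q_values` are assumed inputs):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def discrete_actor_kl(pi_probs, q_values, alpha):
    # Exact KL(pi || softmax(Q / alpha)). Relabelling j_t -> j'_t changes
    # the learned Q-values, hence this actor objective, even though the
    # actor loss never references j'_t directly.
    target = softmax([q / alpha for q in q_values])
    return sum(p * (math.log(p) - math.log(t))
               for p, t in zip(pi_probs, target) if p > 0.0)
```

The KL is zero exactly when the policy matches the softmax of the (relabelled) Q-values, which is why the correction propagates from the critic to the actor.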
**Thank you once again for your valuable feedback. We hope our response has satisfactorily addressed your concerns. If you need further elaboration or additional points to include in the response, we welcome further discussion to ensure everything is clear and satisfactory.**
---
Rebuttal 2:
Title: Official Comment from Reviewer xTD7
Comment: Thank you for considering attempting to address the points I've previously raised.
> Specifically, after each $H$ round of data collection, we evaluate all policies across all tasks and choose the best-performing policy to collect the data for the next $H$ rounds. We refer to this baseline as BPT (Best Performance Transfer)
This is exactly the baseline that I envisioned in my previous feedback. Given the positive results when comparing against this baseline, I no longer view the lack of comparisons against BPT as a minor weakness of this paper.
> Although the actor loss (Line 4 in Algorithm 2) does not explicitly use relabeled $j'_t$, it indirectly affects actor learning due to SAC's unique actor update method [2].
I also agree that this needs to be better highlighted in the next manuscript version. Perhaps it's also important to note that unlike SAC-based actor updates in environments with continuous action spaces (where we rely on a Monte Carlo estimate of the KLD objective since integrating over each possible action is impossible), the KLD objective can be easily evaluated when the input distributions to the KL divergence are categorical.
**Closing remarks**
Given the authors' responses to the points raised by each reviewer, I am increasing my score because I believe this is a good paper worthy of acceptance.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for taking the time to review our response. We are glad that our response has effectively addressed your concerns. Once again, we sincerely appreciate your thorough review and insightful feedback, which have helped us enhance our paper.
Best regards,
All authors | Rebuttal 1:
Rebuttal: To AC and all the reviewers:
We would like to express our sincere gratitude to AC and all the reviewers for their great efforts in evaluating our paper. Your valuable insights and suggestions are greatly appreciated. We have carefully addressed all the questions and concerns raised in the reviews through our rebuttal comments.
The attached PDF to this global rebuttal comment contains:
* Figure 1 is the experimental results with an additional baseline BPT (Best Performance Transfer).
* Figure 2 is an extended version of the ablation study on guide step $K$ (Figure 12 in the paper), which includes an additional curve for the base MHSAC for convenient comparison.
* Figure 3 is the additional experimental results of adapting CTPG to the base RL algorithm TD3.
* Figure 4 is the experimental results on MetaWorld-MT10 with the x-axis being wall-clock time.
If there are any remaining queries or uncertainties after reviewing our responses, we welcome further discussions during the upcoming phase. Your continued engagement is highly valued and appreciated. Thank you once again for your time, expertise, and contribution to our paper.
Pdf: /pdf/3cb08eab0bd3bd31c516ac91df7fd833a98de53e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Conformal Classification with Equalized Coverage for Adaptively Selected Groups | Accept (poster) | Summary: This paper presents a conformal inference method to assess uncertainty in classification by generating prediction sets with valid coverage based on adaptively chosen features. Falling between marginal and strictly conditional coverage, the features in the proposed method are adaptively selected to address potential model limitations or biases, balancing the need for informative predictions with ensuring algorithmic fairness.
Strengths: - The paper is well written, with good organization, clear problem statement and easy-understanding method description.
- The proposed fairness notion is novel and critical for real-world applications. It provides a feasible solution to mitigate the tradeoff between fairness and efficiency. It is also impressive that the method works with small sample sizes.
- The experiments and theorems are comprehensive and support the method and its claims.
Weaknesses: - It seems that *adaptive equalized coverage* only requires each group to surpass a given coverage rate. Desirably, as in RAPS, the coverage rate should equal the desired level exactly to guarantee exact demographic parity.
- Though a little out of scope, this paper would be better to provide performance evaluation with presence of distribution shift, especially label shift and shift in protected attributes.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Would your method satisfy more rigorous fairness (Q1) with a better design? I think this may be related to the union of sub-intervals. Adaptively adjusting the coverage rate of subgroups may help.
- Also, the union of sub-intervals may make less sense in some real-world applications. Say, in some data, the target variables are categorical but also ordered labels, say levels 1~5. Predicting a sample to be either level 1 or 5 is confusing. Has this phenomenon been considered and addressed in your paper?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and insightful feedback.
## Seeking Different Fairness Criteria
The potential for our method to be adapted based on different fairness criteria, going beyond our notion of selectively equalized coverage, is indeed both intriguing and promising. Although exploring those questions goes beyond the immediate scope of this paper, we hope that others will be inspired to improve and build upon our work in this direction. We will definitely add this suggestion in the revised manuscript.
## Classification with Ordered Labels
In our current framework, we have focused on predicting categorical (unordered) labels. If the labels are ordered, an easy modification of our method would require only a small change to Equation (8). Specifically, when the target variables are ordered labels (say level 1 $<$ level 5), then $\hat{C}(X_{n+1})$ becomes the discrete convex hull of all its components on the right-hand side in Equation 8. Because the prediction set resulting from taking unions will be a subset of the prediction set resulting from taking discrete convex hulls, the new prediction set will automatically satisfy the adaptive equalized coverage defined in Equation (3). An intriguing question, which may be investigated in future work, is whether it is possible to devise a better approach for ordinal labels. Thank you for this helpful comment. We will include this discussion in the revised paper. | Summary: The paper introduces a conformal inference method to assess uncertainty in classification by generating prediction sets with valid coverage, conditional on adaptively chosen features. These features are selected to address model limitations or biases, balancing efficiency and fairness by ensuring equalized coverage for sensitive groups. The paper demonstrates this method's validity and effectiveness on both simulated and real datasets.
Strengths: 1. The method efficiently identifies and addresses algorithmic biases, ensuring fair treatment of sensitive groups without sacrificing informativeness.
2. AFCP provides a practical compromise between efficient, informative predictions and algorithmic fairness by adjusting prediction sets for sensitive groups.
3. Demonstrated effectiveness on both synthetic and real-world datasets, outperforming traditional methods in terms of both fairness and prediction informativeness.
Weaknesses: 1. Limitation on Sensitive Attribute Selection: The current method may not always identify the most relevant sensitive attribute, especially with limited sample sizes or overlapping biases. The ‘Automatic Attribute Selection’ section is somewhat challenging to follow. For instance, in Equation 6, it seems that only the argmax element is included in the set, and the algorithm does not seem to provide a sensitive attribute with formal guarantees. Since much of the paper’s contribution hinges on this algorithm, the lack of clarity in its description makes it hard to be convinced of its effectiveness.
2. Sample Size Sensitivity: AFCP’s performance can be constrained by the sample size. Smaller sample sizes may result in less reliable attribute selection and less informative prediction sets. In practical datasets, sample sizes tend to be small when many features are selected. The Sensitive Attribute Selection algorithm may lead to a large label set, resulting in too few samples to accurately determine the thresholds.
3. Computational Complexity: With a large label set, such as in ImageNet which has 1000 labels, this method involves complex procedures for attribute selection and prediction set construction. This complexity may make the method computationally intensive.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Coverage for Occupation 1 in Figure 5: The coverage for Occupation 1 in Figure 5 shows a significant increase in the Marginal method when the sample size increases from 2000 to 5000. This rapid increase is puzzling, particularly since the average size of the prediction set also decreases in this interval. Could you clarify why this behavior occurs? It seems counterintuitive, as one would generally expect a consistent relationship between coverage and prediction set size.
2. Impact of Calibration Set Size on Sensitive Attribute Selection and Final Prediction Set: How do you think the calibration set size influences the algorithm for Sensitive Attribute Selection and the final prediction set? Specifically:
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review, which allows us to clarify key points and address potential misunderstandings. We believe these clarifications will enhance the paper's accessibility.
## Clarification on Method Aims
While other reviewers have praised the paper for its clarity, we appreciate the opportunity to refine our explanations and address your concerns. Specifically, we would like to clarify the role of the attribute selection component in our method, and the reason why we provide formal guarantees for the coverage of the prediction sets rather than for the "correctness" of the selection procedure.
This paper addresses the problem of constructing prediction sets with guaranteed coverage conditional on adaptively selected sensitive attributes. As you noted, our goal is to bridge the gap between existing approaches for marginal and group-conditional conformal prediction, finding a practical balance between efficiency and fairness.
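For context, the standard marginal split-conformal calibration that our method builds on can be sketched as follows (a minimal sketch of the generic procedure, not of AFCP itself; the score values and label names are illustrative):

```python
import math

def conformal_threshold(cal_scores, alpha):
    # The ceil((n+1)(1-alpha))-th smallest calibration score; prediction
    # sets built with this threshold have marginal coverage >= 1 - alpha.
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1.0 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(label_scores, tau):
    # Include every candidate label whose nonconformity score is at most tau.
    return {y for y, s in label_scores.items() if s <= tau}
```

Roughly speaking, AFCP replaces this single marginal threshold with calibration that also accounts for the adaptively selected sensitive attribute, which is what underlies the guarantee in Equation (3).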
To obtain reliable inferences and approximately maximize conditional coverage, our method seeks to identify the attribute (or attributes) most likely to result in significant under-coverage under a standard (marginal) conformal inference approach. The specific selection algorithm proposed in this paper has demonstrated strong practical performance, as evidenced by our numerical experiments (see Figure 4 and the new Figure 2 in the PDF supplement accompanying this response). However, the core ideas of our method are flexible enough to accommodate different selection algorithms. In the revised manuscript, we will better highlight this flexibility.
A consequence of this flexibility and "assumption-lean" setup is that establishing formal guarantees about the selection of the "correct" attribute would be challenging and somewhat limiting. It would require stronger assumptions, and it would necessitate a theoretical analysis heavily dependent on the specific implementation details of the selection algorithm. While this may be an interesting direction for further research, potentially strengthening the connection to Cherian and Candès (2023), it falls outside the scope of this paper.
## Impact of Calibration Sample Size
Constructing informative prediction sets with high conditional coverage is more challenging with smaller sample sizes. This is an inherent challenge, not a specific limitation of our method, and it explains why our experiments focus on comparing the performance of our method against several benchmarks across various sample sizes. These experiments demonstrate that our method consistently performs well across all sample size scenarios. For detailed results, see Figures 3–5 in the paper and the new Figures 1–3 in the PDF supplement accompanying this response.
For example, in the experiments of Figures 3 and 4, when the sample size is as small as 200, it is inherently challenging to: (1) fit an accurate predictive model, (2) assess conditional coverage, and (3) reliably identify the sensitive attribute associated with the lowest coverage. This is reflected by the relatively large size of the prediction sets output by all methods and by the relatively large discrepancies between the nominal and empirical conditional coverage levels. Nevertheless, our method manages to frequently select the "correct" sensitive attribute, achieving significantly higher conditional coverage compared to the standard "marginal" benchmark, with only a slight increase in prediction set sizes. Further, as the sample size increases, our method becomes highly effective at identifying the attribute with the lowest conditional coverage, as illustrated in Figure 4. This allows our method to achieve high conditional coverage with relatively small prediction sets. Overall, these experiments demonstrate that our method offers distinct advantages over existing approaches in both large-sample and small-sample settings.
## Applicability and Computational Cost
Our proposed AFCP method is quite broadly applicable and can be efficiently implemented in many classification settings where the number of labels is not exceedingly large. Natural applications include predicting recidivism risks, determining graduation classes, and addressing binary classification problems such as spam detection and credit default risk assessment.
However, in scenarios where the number of possible classes is extremely high, it is true that the computational cost of our method may become a limiting factor. While addressing this issue is beyond the scope of this paper, we believe that extending our method to accommodate extreme classification tasks presents valuable opportunities for future research. In the revision, we will highlight these opportunities and cite relevant works in the conformal inference literature, such as [1]. These references will provide additional context and potential inspiration for future developments.
Reference: [1]"Class-Conditional Conformal Prediction with Many Classes." Ding et. al., NeurIPS 2023.
## Clarifications on Figure 5
The behavior observed in Figure 5 can be easily explained. The horizontal axis represents the cumulative sample size, which includes both training and calibration samples. As the cumulative sample size increases, more data becomes available to train a more accurate predictive model. This increased accuracy explains why the conditional coverage for Occupation 1 improves while the average prediction set size decreases: the model becomes more precise for all individuals, including those most affected by algorithmic bias.
The main takeaway from Figure 5 is that our method consistently provides relatively informative and reliable predictive inferences compared to other conformal inference approaches, regardless of the amount of available data or the varying accuracy of the underlying machine learning model. Note that the model varies for different sample sizes but remains the same for all conformal inference methods, ensuring a fair comparison.
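The group-conditional metrics discussed above (marginal coverage, average prediction set size, and the coverage of the worst-off group) are straightforward to compute once the prediction sets are available. The following is a minimal, self-contained sketch — not the authors' code; the function name and the boolean-matrix representation of prediction sets are our illustrative assumptions:

```python
import numpy as np

def evaluate(sets, y, groups):
    """Empirical metrics of the kind reported in the figures.

    `sets` is a boolean (n, n_classes) matrix: sets[i, k] is True iff
    label k is in the prediction set for sample i.  `y` holds the true
    labels and `groups` the (sensitive) group membership of each sample.
    """
    n = len(y)
    # covered[i] is True iff the true label of sample i is in its set
    covered = sets[np.arange(n), y]
    return {
        "marginal_coverage": covered.mean(),
        "avg_set_size": sets.sum(axis=1).mean(),
        # coverage of the group most affected by miscoverage
        "min_group_coverage": min(
            covered[groups == g].mean() for g in np.unique(groups)
        ),
    }
```

With this kind of helper, the gap between nominal level and `min_group_coverage` is exactly the conditional-coverage discrepancy the figures visualize.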
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response, I raise my rating to 5. | Summary: The paper presents a novel conformal inference method aimed at generating prediction sets with valid coverage, conditional on adaptively chosen features. This method is intended to address the dual concerns of efficiency and algorithmic fairness by ensuring equalized coverage for the most sensitive groups, thus providing informative predictions while maintaining fairness. The proposed approach, termed Adaptively Fair Conformal Prediction (AFCP), is validated on both simulated and real datasets.
Strengths: 1. The paper addresses a significant and timely problem in machine learning—ensuring fairness and reliability in prediction sets. The introduction of AFCP, which dynamically adjusts for biases in a data-driven manner, is an innovative and practical contribution.
2. The paper provides a strong theoretical basis for AFCP, clearly defining adaptive equalized coverage and offering proofs to support the validity of the method.
3. The empirical results are robust, covering synthetic and real-world datasets. The comparisons with other benchmarks are comprehensive, demonstrating the practical benefits of AFCP in various scenarios.
4. The method's steps, including automatic attribute selection and prediction set construction, are well-explained and logically structured.
Weaknesses: 1. While the paper acknowledges the scalability issues associated with current methods for conformal inference with equalized coverage, it does not provide a detailed analysis of the computational complexity of the proposed method. A deeper exploration of scalability, particularly for large-scale datasets, would strengthen the paper.
2. The method's reliance on leave-one-out procedures for attribute selection could be computationally intensive and potentially unstable with small sample sizes. More discussion on the stability and robustness of the attribute selection process, along with empirical evidence, would be beneficial.
3. The literature review, while covering relevant works, could be more exhaustive. Incorporating additional recent studies on robust conformal inference and handling distributional shifts would provide a broader context and highlight the novelty of the proposed approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the computational complexity of the proposed method, and how does it scale with large datasets?
Are there any optimization strategies to improve efficiency?
How stable is the attribute selection process across different sample sizes and datasets?
Can the method handle cases where multiple sensitive attributes need to be considered simultaneously?
Can the method be tested on more diverse and larger real-world datasets to validate its scalability and generalizability?
How does the method perform in scenarios with highly imbalanced datasets?
Have other attribute selection procedures been considered, and how do they compare with the proposed leave-one-out approach?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our paper and providing constructive feedback. Please find our responses below.
## Computational Complexity Analysis
As (perhaps too) briefly mentioned in Section 1.4, Appendix A6 provides a detailed computational complexity analysis for implementing our algorithms for either a single test sample or multiple test samples. Is this the information you were looking for? We will make the reference to this analysis clearer in the camera-ready version.
Notably, our AFCP methods can be efficiently implemented for multiple test samples using a shortcut. Specifically, the identification of each subgroup in the calibration set does not rely on the test sample and can be reused to calculate group-wise FPRs and miscoverage rates for different test samples. For instance, this shortcut reduces the computational complexity of the AFCP method from $\mathcal{O}(mn\log n + nKMm)$ to $\mathcal{O}(n\log n + nm + MK(n + m))$ in outlier detection settings, where $n$ is the calibration set size, $m$ is the number of test samples, $K$ is the total number of sensitive attributes, and $M$ is the maximum count of groups across all attributes.
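The caching idea behind this shortcut — precompute everything that depends only on the calibration set once, then reuse it for every test sample — can be sketched as follows. This is an illustrative split-conformal sketch under our own assumptions (function names and the group-wise quantile rule are ours), not the paper's actual AFCP implementation:

```python
import numpy as np

def groupwise_quantiles(cal_scores, cal_groups, alpha):
    """Precompute, once, a conformity-score quantile per calibration subgroup.

    This step depends only on the calibration set, so its cost is paid a
    single time and the result is reused across all test samples."""
    quantiles = {}
    for g in np.unique(cal_groups):
        s = np.sort(cal_scores[cal_groups == g])
        n_g = len(s)
        # finite-sample-corrected index for the (1 - alpha) quantile
        k = min(n_g - 1, int(np.ceil((1 - alpha) * (n_g + 1))) - 1)
        quantiles[g] = s[k]
    return quantiles

def predict_sets(test_scores, test_groups, quantiles):
    """Per-test-sample work is now a dictionary lookup plus a comparison,
    instead of re-scanning the calibration set for every test point."""
    return [scores <= quantiles[g]  # boolean mask over candidate labels
            for scores, g in zip(test_scores, test_groups)]
```

The same pattern underlies the complexity reduction quoted above: the `O(n log n)`-type sorting work is amortized over all `m` test samples rather than repeated for each one.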
## Impact of Small Sample Sizes
Although any attribute selection procedure may be less stable and effective with small sample sizes, we see this as an inherent challenge of small data sets rather than a flaw of our method. As mentioned in response to another comment, future work could explore the performance of our method in combination with different attribute selection procedures, as it is possible that alternative approaches may perform better in different settings, including with small sample sizes. However, the empirical performance of our AFCP method, as currently implemented, already surpasses that of other benchmark methods applied to the same data, in both large-sample and small-sample settings.
For instance, in the experiments of Figures 3 and 4, it is challenging to accurately identify the attribute associated with significant algorithmic bias when the sample size is as small as 200. However, AFCP manages to select the correct attribute frequently enough to achieve significantly higher conditional coverage compared to the Marginal method, without substantially increasing the average prediction set size. Figure 4 further illustrates this behavior, showing the selection frequency of each attribute. As the sample size increases, our method begins to consistently select the correct attribute in all instances.
These findings align with new experiments based on the COMPAS dataset, summarized in the attached one-page PDF.
We will incorporate these explanations into the revised paper to emphasize the advantages of our methods with small sample sizes. Thank you for your comment!
## Simultaneous Selection of Multiple Attributes
The appendix of our paper describes variations of our AFCP method that are designed to select and protect multiple attributes simultaneously. Specifically, in Appendix A2, we describe AFCP with multiple selected attributes in the multi-class classification setting, and in Appendix A4.3, we detail AFCP with multiple selected attributes in the outlier detection setting. Appendix A7.2.3 presents experimental results when multiple attributes can be selected using the real-world Adult Income Data. We will strive to better highlight these extensions of our method in the revised paper.
## Additional Experiments with Real Datasets
Our proposed method is broadly applicable and can be efficiently applied to a variety of classification and outlier detection tasks. In this paper and its appendices, we demonstrated its performance using two synthetic datasets and two real-world datasets. To supplement those results, in response to your feedback and the feedback of other reviewers, we have also conducted new experiments using the widely studied COMPAS dataset. The results obtained with the COMPAS dataset are summarized in the **new Figures 1-3** in the attached one-page PDF document. These additional experiments lead to conclusions that are qualitatively similar to those of the previous experiments. However, since this is a well-studied and interesting dataset, we think it may be valuable to include these new results in the revised paper.
Regarding the applicability of our method to imbalanced data, please note that the new experiment on COMPAS data for recidivism prediction, for example, involves imbalanced data, with roughly 10\% of samples having the response label "High", 20\% having the label "Medium", and 70\% having the label "Low".
## Alternative Attribute Selection Procedures
Thank you for this thoughtful question. While the underlying ideas of our method are indeed flexible enough to incorporate attribute selection procedures that go beyond those considered in this paper—a strength we will emphasize more in the revision—we believe that delving into the subtle empirical trade-offs associated with different selection algorithms is beyond the scope of this paper. We hope that our work will inspire future research to explore this question further.
## Additional References
Following your advice, we will include more references to recent works on robust conformal inference under distribution shift. Although these works address different problems and take different perspectives, they are sufficiently related to merit mention in Section 1.5 (Related Works). The additional page allowed in the camera-ready manuscript will enable us to briefly discuss these references without sacrificing other content. We believe this addition will provide broader context on recent trends and highlight opportunities for integrating ideas in future research. Thank you for this helpful suggestion!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. I maintain my positive score of weak accept. | Summary: The paper focuses on the problem of conformal inference with equalized coverage introduced in [1]. The authors propose a new method, Adaptively Fair Conformal Prediction (AFCP) that (i) adaptively selects a sensitive attribute corresponding to the group most negatively affected by algorithmic bias as evaluated by the miscoverage rate, and (ii) constructs a prediction set that satisfies equalized coverage over groups defined by the selected attribute. The authors perform experiments on synthetic and real data that demonstrate the performance of AFCP relative to methods that guarantee marginal coverage and exhaustive equalized coverage i.e., valid coverage conditional on all sensitive attributes.
[1] Yaniv Romano, Rina Foygel Barber, Chiara Sabatti, and Emmanuel Candès. With malice toward none: Assessing uncertainty via equalized coverage. Harvard Data Science Review, 2020.
Strengths: 1. The problem of conformal inference with equalized coverage is an important problem and of interest to the community.
2. The trade-off between efficiency and equalized coverage addressed in the paper is challenging and of practical significance.
3. Empirical evaluation demonstrates the performance improvement of AFCP over baselines.
Weaknesses: 1. The clarity of the paper can be greatly improved. Detailed comments:
- p2 l37: "A limitation of the current method for conformal inference with equalized coverage...." -- what is the current method?
- p2 l41: what do you mean by efficiency and informativeness here? Is it set size? Please make it clear and precise.
- Section 1.1: Overall, it is hard to understand the motivation from this section. It would be helpful to rewrite this a bit e.g. l32-36: it is not clear how this conveys the rationale for conformal inference with equalized coverage different from rationale for conformal inference more generally.
- The notations are incorrect in some places and unnecessarily complex
- p2 l71-72: $\phi$ is defined as mapping to $\mathbb{N}$ whereas the next line says it results in vector of length $|A|$. I believe it should be defined more generally
- p2 l77-84: The notion of equalized coverage in [17] is defined as: $\mathbb{P}(Y_{n+1} \in C(X_{n+1}, A_{n+1}) | A_{n+1} = a) \geq 1 - \alpha$. It is not clear how this is extended to multiple (possibly overlapping) groups defined by $K$ sensitive attributes using $\phi$
- It is not clear at some places whether $A$ refers to single attribute or set
- Alg 2 l4-5: in line 4, $A$ refers to single attribute, in line 5 it refers to multiple (set of) attributes
- p6 l197: (7) returns a final selected attribute while the sentence refers to subset of attributes
- p6 l204: minor comment, this does not hyperlink to A1
- typos in table captions (Table A25 onwards)
2. Insufficient empirical evaluation: While the paper includes detailed analysis on the two selected datasets, both these setups are fairly synthetic. Also, the Nursery data seems to be from 1997; the paper lacks evaluation on any recent and common benchmark datasets in literature. The paper also seems to lack experiments that demonstrate coverage and efficiency performance in the presence of multiple sensitive attributes.
3. The AFCP1 variation of AFCP seems to outperform AFCP in all cases -- when the sample size is small, we still see undercoverage for the Blue group using AFCP (Fig 3) and AFCP1 is more robust by selecting at least one attribute. While the paper mentions this, what would be the advantage of using AFCP? Is there any procedure to select variations for different data regimes?
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The main text discusses the AFCP algorithm where at most one sensitive attribute may be selected. If we want equalized coverage over multiple attributes, won’t using AFCP result in similar challenges as exhaustive equalized coverage?
2. How does the size of the restricted calibration sample affect the performance of the algorithm?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors discuss the limitations in the Discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and constructive feedback. We have addressed your questions and concerns point-by-point below.
## Additional experiments
Might you have missed some experiments described in the Appendix and mentioned in Section 3.4? We will highlight these experiments more clearly in the main text. In any case, following your suggestion, we have also conducted new experiments using the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) dataset.
**Experiments in the Appendix.** In addition to using the two datasets described in the main paper (one synthetic dataset and the nursery data), we had conducted experiments using an additional synthetic dataset (Appendix A7.2.2) and a common benchmark dataset, the Adult Income Dataset (Appendix A7.2.3). The experiments with the income data also include an AFCP extension that allows for the selection of multiple sensitive attributes. This extension is referred to as the "APCF+" method (depicted by the green lines) and is presented in Figures A24–A33. We will better highlight these experiments in the revised paper.
**New experiments with COMPAS data.** The COMPAS dataset is used for multi-class classification tasks, predicting the risk of recidivism across three categories: 'High,' 'Medium,' and 'Low.' As illustrated in the **new Figures 1-2** in the attached PDF document, these additional experiments consistently demonstrate that our AFCP method outperforms other benchmarks. Specifically, it achieves higher conditional coverage than the Marginal method and, on average, produces smaller prediction sets than the Exhaustive and Partial methods. These results are consistent with those presented in the paper and will be included in the revised paper. Thank you!
## Comparisons between AFCP and AFCP1
Both AFCP and AFCP1 outperform benchmark approaches with small sample sizes, excelling in different areas. AFCP is better suited for scenarios where there is uncertainty about the presence of significant algorithmic bias. On the other hand, AFCP1 is more effective when there is prior knowledge that at least one attribute may be biased. This distinction will be clarified in the revised paper.
In Figure 3, where one group (Color - Blue) is consistently biased, AFCP1 achieves slightly higher conditional coverage than AFCP. While AFCP shows slight undercoverage for the blue group with small sample sizes, it still performs better than the Marginal approach.
AFCP's occasional inability to select a sensitive attribute with small sample sizes reflects the inherent challenges of small datasets rather than a flaw in the method. When the method does not select an attribute, it often indicates a lack of sufficient evidence of algorithmic bias, making it reasonable to calibrate the prediction sets only for marginal coverage.
## Comparison Between AFCP and the Exhaustive Approach
Our AFCP method offers advantages over the Exhaustive approach, especially when the sample size is small and the specific attributes indicating biased groups are not known beforehand.
For instance, in a dataset with five binary attributes where only two are significantly associated with algorithmic bias, an Exhaustive approach would have to consider all five attributes or all combinations of two attributes to ensure valid coverage. This process is inefficient. In contrast, our method can identify the two most relevant attributes in a data-driven way. It then focuses on calibrating the prediction sets within the subgroups formed by these attributes.
## Impact of Calibration Sample Size
We are not completely sure about the meaning of "restricted calibration sample" in your question, but we will interpret it as the number of calibration samples belonging to the biased group identified by the selected attribute. Please correct us if this is not what you meant.
In general, larger sample sizes make it intrinsically easier to identify (and thereby enable mitigating) significant algorithmic biases. Conversely, if the sample size within any group is small, it is more difficult to estimate the coverage within that group. This can result in our method selecting the “wrong” sensitive attribute. However, despite this unavoidable difficulty, the experiments demonstrate that our method performs well relative to other approaches both in small-sample and in large-sample settings.
In particular, Figures 3 and 4 study the performance of our method as a function of the total number of samples in the training and calibration data. From these two plots we can see that when the sample size is as small as 200, it is challenging to select the true attribute because the data is subject to greater variability. Nevertheless, our method is able to select the “correct” attribute often enough to obtain informative prediction sets with significantly higher conditional coverage compared to the marginal approach.
Figure 4 shows that, as the sample size increases, it becomes more likely for our method to select the correct sensitive attribute, as anticipated. The new COMPAS experiments we conducted following your suggestion demonstrate consistent behavior and confirm the practical advantages of our method.
To better investigate the effect of “restricted” sample size for the biased group associated with the attribute selected by our method, we counted the number of data points in the protected group for each experiment and averaged the results over 100 independent experiments. The results are plotted in the **new Figure 3** in the attached PDF document. Please let us know if this answers your question.
## Clarity and Typos
We will fix the typos and make the clarifications you suggested. Thank you!
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanation and clarification. I acknowledge the comparison between methods as I did in my previous comment. I am also not raising questions on the associated guarantees of conformal prediction, but it would be helpful to add accuracy metrics in the future versions to highlight whether this classification setting is challenging to begin with, and how the prediction sets provide more helpful information.
In the light of the current discussion, I am willing to raise my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you! We will include additional details on the COMPAS data analysis in the revised version of the paper, in support of the three new figures, including accuracy metrics. Please let us know if there is anything else that we should clarify at this point. (In the previous message we didn't realize there's still time until August 13th to discuss, if needed).
---
Rebuttal 2:
Comment: I thank the authors for the detailed response. I will list some comments and questions I have below:
1. Thank you for the additional experiments. Feel free to correct me if I'm missing something, but it seems the COMPAS dataset has three classes. I am not sure if this is a good setup to study coverage -- e.g. average size is <=2 in Fig 1. How informative would coverage be in this case compared to accuracy? That said, I acknowledge the results.
2. "restricted calibration sample" is referred from the paper (e.g., l203)
---
Rebuttal Comment 2.1:
Comment: Thank you for acknowledging our response. We’re pleased to see that our initial reply addressed your previous questions and concerns.
We also appreciate your follow-up question about the COMPAS data analysis, though it’s unfortunate that it arrived so close to the deadline, particularly since the meaning of this question is not entirely clear to us.
A 3-class classification problem is a reasonable and informative setting for comparing the performance of different conformal prediction methods. In this case, it shows that our method achieves the desired 90% coverage with prediction sets that are relatively small, averaging below 1.5 in size. It is important to emphasize that this is a nontrivial result to achieve in a 3-class context. In any case, what matters most here is the comparison between the performances of different conformal prediction methods, which clearly shows the advantages of our approach.
In general, an average prediction set size of 1.5 suggests high confidence (prediction sets of size 1) in the majority of cases, and less informative prediction sets (sizes 2 and 3) only in a minority of cases where the model may be less accurate. Achieving this type of separation between confident and unconfident predictions is precisely what conformal prediction (and, more broadly, uncertainty quantification in machine learning) generally aims to do. | Rebuttal 1:
Rebuttal: We are grateful to the four referees for their detailed reviews and constructive suggestions. We have addressed their questions and concerns point-by-point below. In addition to providing several clarifications, which can be easily reflected in the camera-ready manuscript to enhance its accessibility and completeness, we conducted additional experiments using the COMPAS [1] dataset. The results of these experiments, which align with those already described in the paper, are summarized in the three figures contained within the attached one-page PDF file.
How we have addressed the main points raised by the reviewers:
- **Extended Empirical Evaluation:** We clarified potentially unclear aspects of our empirical evaluations and highlighted additional experiments described in the Appendices that may have been previously overlooked. We also conducted further experiments using the COMPAS dataset, a well-known benchmark for algorithmic fairness research.
- **Clarifications on the Aims and Performance of the Attribute Selection Procedure:** We provided a more detailed discussion of the attribute selection component of our method, including its performance as a function of sample size. This discussion provides additional details, which will be incorporated in the revised manuscript, and explicitly highlights aspects of the existing figures and discussions that might have been previously overlooked.
- **Deeper Comparison Between Variations of Our Method:** We elaborated further on the relative strengths of the two variations of our method, AFCP and AFCP1. Additionally, we highlighted the presence of additional extensions described in the Appendices, designed to select more than one sensitive attribute simultaneously. These extensions may have been previously overlooked by some reviewers.
- **Clarifications on Scalability and Complexity:** We pointed out that the Appendices include a detailed discussion of the computational complexity of our method, which may have been missed by one reviewer.
- **Opportunities for Further Research:** We acknowledged and commented on several intriguing suggestions for further extensions of our method and other opportunities for follow-up research proposed by the referees. We are grateful for these ideas and look forward to including their discussion in the camera-ready version of the paper.
- **Clarifications on Notation:** We gratefully acknowledged the presence of some typos and potentially confusing notations pointed out by one reviewer. These issues will be corrected in the camera-ready version of the paper.
Reference: [1] ProPublica. (2016). Compas recidivism risk score data and analysis. Retrieved from https://github.com/propublica/compas-analysis.
Pdf: /pdf/0e3b64898fc3f1cbc14e87557cbe40eec328eb16.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
D-MiSo: Editing Dynamic 3D Scenes using Multi-Gaussians Soup | Accept (poster) | Summary: This paper proposes the D-MiSo pipeline, which includes multi-Gaussian components and two deformation networks to model global transformation and local deformation. A good training strategy is introduced to fit the proposed methods. The rendering quality looks good, and the editing results seem better than those of current motion-based sparse control point deformation using 3D Gaussians.
Strengths: 1. The paper proposes the multi-Gaussian component, allowing large 3D scene elements. It seems like a novel idea.
2. D-MiSo uses multi-Gaussian components and two deformation networks for modeling dynamic scenes.
3. The editing results seem better than those of SC-GS.
Weaknesses: 1. Multi-Gaussian representations are similar to SC-GS's control points. Since 3D/4D Gaussians themselves can model large 3D scene elements, this is not a strength. I think the contribution is minor, and it seems that no ablation study is presented in the paper.
2. Most demos are provided in synthetic datasets, which is not convincing. Results on real-world datasets such as [9, 13] are recommended to be included.
3. The "Related Works" section should be reorganized. More discussions on multi-view dynamic scenes and flow-based radiance fields should be included.
4. If your method is based on [3], a brief review should be included in the main text as a preliminary to improve the presentation.
5. More details such as storage costs and training time (for each stage) should be included. Additionally, would the rendering speed be affected by the large deformation network?
6. Minor issues: Line 140: "Guassinas" -> "Gaussians," Line 197: "vertixes" -> "vertices," Fig. 10: "possition" should be "position."
7. What is n_core as mentioned in Eq.1?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. More comparison and discussions with SC-GS should be included since SC-GS can also support editing and share the similar ideas with control points/Gaussians.
2. As I understand, deformation-based methods find it challenging to synthesize correct novel views with large motion. Could the authors provide comprehensive insight into this issue?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes, the limiations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the remarks and excellent comments and appreciate the time taken to review our manuscript. We have responded to these remarks below.
***W1 Multi-Gaussian representations are similar to SC-GS's control points ...***
Our solution differs in architecture and training strategy from SC-GS, which uses control points while we employ two hierarchies of Gaussian components. **More significantly, our method offers far greater flexibility in editing dynamic scenes.** All other Reviewers agree that D-MiSo introduces significant novelty, as underscored by Reviewer 2 (“Experimental results show that this method not only matches the rendering quality of SC-GS but **also enables the editing of more extreme large motions**”). The SC-GS authors use control points, each of which influences a large part of the object. Therefore, when we modify some object element (e.g., the hand of the object, see Fig. 4 in our manuscript), the full object changes (in the case of the example from Fig. 4, the head is moving). Thanks to using two levels of Gaussian components, we can modify objects easily without artefacts (see Fig. 4 in the main paper). **Consequently, our new GS representation is better suited for editing dynamic scenes than SC-GS, and it is the paper's main contribution.**
***W2A Most demos are provided in synthetic datasets, which is not convincing. Results on real-world datasets such as [9, 13] are recommended to be included.***
The SC-GS and other baseline models, which modify dynamic scenes, use only synthetic examples. In contrast, **our paper includes examples of full scenes (please see Fig. 3)**, where we modified real-world dynamic scenes. Furthermore, in Appendix C and Fig. 14, we will also provide additional real-scene modifications. We also add movies (GIFs) of real-scene modifications to the supplementary materials. We agree that such experiments are valuable, and **we will move more examples from the appendix to the main paper.**
***W2B The "Related Works" section should be reorganized. More discussions on multi-view dynamic scenes and flow-based radiance fields should be included.***
Early works on dynamic scenes face difficulties when dealing with monocular settings and uncontrolled or lengthy scenarios. To enhance scene motion modeling, some works in **NeRFs utilize flow-based techniques** [10, g1, g2]. Method [10] extends the static scenario by including time in the domain and explicitly modeling 3D motion as dense scene flow fields. For a given 3D point and time, the model predicts both reflectance and opacity and forward and backward 3D scene flow. In [g1], the model learns scene appearance, density, and motion jointly. NeRFlow internally represents scene appearance and density with a neural radiance field and scene dynamics with a flow field. Finally, [g2] focuses on predicting explicit 3D correspondence between neighboring frames to achieve information aggregation. The correspondence prediction is achieved through the estimation of consistent depth and scene flow information across frames with the use of dedicated networks.
Following suggestions from another Reviewer (**hxyh**; W1), we plan to include **additional works that operate in dynamic scene reconstruction:** [b1, b2]. These are early Gaussian splatting works that significantly contributed to the dynamic scene reconstruction field, and we plan to include them in the revised version of the manuscript. These methods, however, require a multi-view setup similar to those already included [21].
[g1] Yilu et al. 2021, “Neural radiance flow for 4D view synthesis and video processing,” in Proceedings of the IEEE/CVF ICCV
[g2] Zhou et al. 2024, "DynPoint: Dynamic Neural Point For View Synthesis," in NeurIPS 36
***W3 “If your method is based on [3], a brief review should be included in the main text as a preliminary to improve the presentation.”***
Our model is based upon the Gaussian parameterization framework outlined in [3], with a detailed elaboration provided in Appendix A. To enhance readability, this section will be moved to the main text.
***W4 “More details such as storage costs and training time (for each stage) should be included. Additionally, would the rendering speed be affected by the large deformation network?”***
Due to the rebuttal length, ablation studies are included in the global rebuttal.
**W6 “What is n_core as mentioned in Eq.1?”**
We have introduced the notation N_Core (Eq. 1) and N_Sub (Eq. 3) to clarify our references to Gaussian distributions for Core-Gaussians versus Sub-Gaussians.
***Q1 “More comparison and discussions with SC-GS should be included since SC-GS can also support editing and share similar ideas with control points/Gaussians.”***
We provide numerical comparisons with SC-GS in the main paper body (Tab. 1, Tab. 2; Appendix: Table 5 and Table 6 in our draft paper). Fig. 4 presents a visual comparison with the SC-GS model, highlighting the placement of key points in SC-GS. The SC-GS model lacks the key points necessary for hand rotation, which our method provides (third image, second column). Moreover, overextending the SC-GS model results in "tearing" the human's arm, whereas our method handles scaling more effectively. We will also add more visual comparisons to clearly illustrate our approach's robustness.
***Q2 “As I understand, deformation-based methods find it challenging to synthesize correct novel views with large motion. Could the authors provide comprehensive insight into this issue?“***
Models based on deformation networks, including our model and SC-GS, create representations of key components (Core-Gaussians in our case, landmark points in SC-GS) by positioning them optimally for ease of adjustment at each time instant. However, when there is significant movement in the training set, the network responsible for component localization may face challenges. This problem is common to all models that use deformation networks.
---
Rebuttal Comment 1.1:
Comment: Thanks for your comprehensive answers; most of my questions are solved, and I would like to change my score to BA.
---
Reply to Comment 1.1.1:
Title: Authors’ response
Comment: We thank the Reviewer for the constructive feedback and appreciate the increased score. | Summary: This paper builds on SC-GS, which enhanced 3DGS with deformed control points to model low-rank dynamics and modify dynamic objects over time. SC-GS necessitates selecting elements that need to be kept fixed and centroids that should be adjusted throughout editing, and it poses additional difficulties for reproducible editing. This paper proposes Dynamic Multi-Gaussian Soup (D-MiSo), which serves as a kind of mesh representation of dynamic GS: it links parameterized Gaussian splats, forming a Triangle Soup with the estimated mesh, and separately constructs new trajectories for the 3D objects composing the scene, which makes the scene's dynamics editable over time or while maintaining partial dynamics.
Strengths: D-MiSo estimates the mesh as a set of disconnected triangle faces in Multi-Gaussians and uses a dynamic function to control the vertices, which makes dynamic 3D-GS easier to modify than SC-GS.
Multi-Gaussians = Core-Gaussians + Sub-Gaussians. Core-Gaussians are an alternative to the control points discussed in SC-GS, with the added advantage of allowing individual modifications. Sub-Gaussians are defined by the principal components of a Core-Gaussian, such that modifying Core-Gaussians will also change them, which allows scene modifications.
Although the positions of Core-Gaussians are learned by a deformation MLP during training, the method can modify dynamic objects by using the vertices of the Sub-Gaussians or generate a mesh from the Core-Gaussians during inference.
D-MiSo can also handle affine transformation and object scaling.
Weaknesses: Lack of training time comparisons.
Lack of comparison against gs2mesh-based methods and cage-based methods.
Technical Quality: 3
Clarity: 2
Questions for Authors: Lack of discussion with respect to the human-NeRF (Neural Body/HumanNeRF/UV-Volumes) and human-GS methods (Animatable Gaussians/D3GA/GoMAvatar).
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the feedback and for pointing out improvements to our paper. We respond to these concerns in the points below.
***W1: Lack of training time comparisons.***
Here, we showcase a time comparison between the SC-GS and D-MiSo models, with the former training noticeably faster than the latter. This can be attributed to two causes. First, unlike SC-GS, D-MiSo benefits significantly from a larger batch size during training. Second, D-MiSo incorporates two layers of Gaussian components (Core-Gaussians and Sub-Gaussians). SC-GS, on the other hand, uses landmark points, making it faster in training but more constrained in editing and in processing full scenes. Furthermore, our model obtains better results on real scenes; see Tab. 3 and Tab. 4.
|Dataset:|Hook|Jumpin|Trex|Bounce|Hell|Mutant|Standup|
|---|---|---|---|---|---|---|---|
| | | |**D-MiSo**| | | | |
|Batch:|1|1|1|1|1|1|1|
|Time 1st stage [h]|0:00:53|0:00:56|0:01:15|0:01:21|0:00:46|0:01:10|0:00:56|
|Time 2nd stage [h]|0:59:03|0:53:41|1:13:51|1:23:45|0:24:35|1:09:26|0:41:41|
|Batch:|4|4|4|4|4|4|4|
|Time 1st stage [h]|0:02:48|0:02:22|0:02:41|0:03:12|0:02:27|0:03:46|0:03:17|
|Time 2nd stage [h]|1:40:15|1:16:18|2:14:09|1:43:08|1:01:27|1:48:49|1:19:32|
|Batch:|8|8|8|8|8|8|8|
|Time 1st stage [h]|0:05:30|0:05:17|0:05:38|0:05:16|0:04:28|0:05:51|0:05:56|
|Time 2nd stage [h]|2:27:08|2:03:25|2:42:22|2:40:30|1:43:33|2:18:43|2:12:12|
| | | |**SC-GS**| | | | |
|Batch:|1|1|1|1|1|1|1|
|Time|0:24:52|0:21:31|0:30:05|0:30:02|0:17:37|0:27:26|0:20:23|
***W2: Lack of comparison against gs2mesh-based methods and cage-based methods.***
**- GS2Mesh based methods:**
We have already included the following mesh-based Gaussian splatting works, which operate in static setups [3, 26, 27].
To address the Reviewer’s remark, we will also include a more recent work that combines meshes with Gaussians and operates in dynamic scenarios: DG-Mesh [e1]. This framework reconstructs a high-fidelity and time-consistent mesh from a single monocular video; building on this representation, it recovers high-quality meshes from the Gaussian points and can track mesh vertices over time, enabling applications such as texture editing on dynamic objects. However, the method relies on a Poisson Solver and differentiable Marching Cubes to recover the deformed surface, which significantly complicates and slows down the pipeline. Moreover, it does not explore geometry modification capabilities, which constitute a significant aspect of our work.
**- Cage based methods:**
We thank the Reviewer for the suggestions regarding cage-based methods. In response, we will include two recent works that utilize the cage concept. Cage-based methods like [e2, e3] are valuable for the efficient and intuitive manipulation of 3D scenes. By providing a direct deformation approach, these methods simplify a traditionally complex task, making it more accessible. However, they require complex cage-building steps, which complicates the pipeline. These works in their current form do not address training on dynamic scenes, so a direct comparison to our method is not feasible. Additionally, cage-based methods may lack the flexibility and precision of manual, vertex-level deformation techniques, potentially missing fine details. In contrast, our work emphasizes small and precise element movement, addressing this limitation.
[e1] Liu et al. 2024, “Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Monocular Videos,” arXiv preprint arXiv:2404.12379. Retrieved from https://arxiv.org/abs/2404.12379
[e2] Huang et al. 2024, “GSDeformer: Direct Cage-based Deformation for 3D Gaussian Splatting,” arXiv preprint arXiv:2405.15491. Retrieved from https://arxiv.org/abs/2405.15491
[e3] Jiang et al. 2024, “VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality,” arXiv preprint arXiv:2401.16663
***Q1: Lack of discussion with respect to the human-NeRF (Neural Body/HumanNeRF/UV-Volumes) and human-GS methods (Animatable Gaussians/D3GA/GoMAvatar).***
Several works combine human faces/avatars and meshes, and those based on Gaussian representations are particularly relevant to this discussion. Examples include [f1-f5]: all of them can model dynamic avatars, most based on video data. However, these specialized works rely on human pose/face priors (e.g., FLAME fitting [f6]). We believe it would be valuable to explore the potential of adapting D-MiSo concepts in this context, building on existing works in the field. We envision Stage 1 focusing on effectively integrating such models in motion, while the anchoring concept could be adapted to efficiently capture the small details of moving avatars. However, human avatars are a broad topic, with several notable and advanced works emerging each month; a thorough and accurate analysis of related works is required to fully investigate the best utilization and adaptation of our method in this context. We will include the following papers in the Related Work and Future Work sections.
[f1] Li et al. 2024, “Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling”, in Proceedings of the IEEE/CVF CVPR
[f2] Zielonka et al. 2023, “Drivable 3D Gaussian Avatars”, arXiv preprint arXiv:2311.08581.
[f3] Wen et al. 2024, “Gomavatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh,” in Proceedings of the IEEE/CVF CVPR
[f4] Qian et al. 2024, "GaussianAvatars: Photorealistic head avatars with rigged 3d Gaussians," in Proceedings of the IEEE/CVF CVPR
[f5] Xiang et al. 2023, "Flashavatar: High-fidelity digital avatar rendering at 300fps." arXiv preprint arXiv:2312.02214
[f6] Li et al. 2017, “Learning a model of facial shape and expression from 4D scans”, in ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 36(6)
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. It addressed my main concerns. However, I don't think the human-NeRF-based methods (such as Neural Body, HumanNeRF, and UV-Volumes) should be ignored. I believe it is fundamental for this paper to select some of these works and discuss the line of progress.
---
Reply to Comment 1.1.1:
Title: Authors’ response
Comment: We thank the Reviewer for the feedback. To provide a more balanced discussion, aside from the already included Gaussian Splatting-based 3D avatar works, we will include in our manuscript all the suggested NeRF-based works, recognized for the following contributions:
In [h1], the authors introduce a novel human body representation in which learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh. This approach allows for the natural integration of observations across frames and provides geometric guidance, leading to more efficient learning of 3D representations; Neural Body effectively addresses the challenge of ill-posed representation learning in scenarios with highly sparse monocular video views. HumanNeRF [h2], in turn, optimizes a volumetric representation of a person in a canonical T-pose, with a motion field that maps the canonical representation to each frame of the video via backward warps. The motion field is decomposed into skeletal rigid and non-rigid motions, handled by deep networks. HumanNeRF demonstrates significant performance improvements over previous methods and produces compelling free-viewpoint renderings from monocular video, even in challenging, uncontrolled capture scenarios. Finally, to address the high computational costs of NeRF rendering, [h3] proposes the UV-Volumes approach, which enables real-time, editable free-view video of human performers. By separating high-frequency appearance details from the 3D volume and encoding them into 2D neural texture stacks (NTS), UV-Volumes allows for the use of smaller and shallower neural networks to achieve efficient 3D density and texture coordinate estimations while maintaining detailed 2D appearance capture.
If this clarification adequately addresses the concerns raised, we kindly ask the Reviewer to consider raising their scores accordingly.
[h1] Peng et al. 2021, "Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans," Proceedings of the IEEE/CVF CVPR
[h2] Weng et al. 2022, "HumanNeRF: Free-Viewpoint Rendering of Moving People From Monocular Video," Proceedings of the IEEE/CVF CVPR
[h3] UV-Volumes - Chen et al. 2023, "UV Volumes for Real-Time Rendering of Editable Free-View Human Performance," Proceedings of the IEEE/CVF CVPR | Summary: This paper proposes a novel framework for modeling and editing dynamic scenes. The authors used a two-pass Multi-Gaussian approach to represent the entire scene. First, they obtained relatively stable Core Gaussians through initialization, and then used the Core Gaussians to drive Sub-Gaussians to fit the entire scene. To better edit motion, the authors parameterized each Gaussian with triangle soup. Experimental results show that this method not only matches the rendering quality of SC-GS but also enables the editing of more extreme large motions.
Strengths: This paper is clear and easy to follow. The dynamic reconstruction method based on Gaussian splatting inherently has advantages in motion editing, but this direction has not been well-explored by the community. Therefore, I am very grateful to the authors for focusing on improving the motion editing capabilities of dynamic scenes and achieving impressive editing results (as shown in Fig. 4). The triangle-soup-based motion editing proposed by the authors really makes sense. Additionally, the honest comparison of rendering metrics in both synthetic and real scenes is also appreciated.
Weaknesses: 1. I think the following papers should also be cited, because they have made significant contributions to the early dynamic scenes reconstruction based on Gaussian splatting.:
- (CVPR 2024) Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis by Zhan Li et al.
- (ICLR 2024) Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting by Zeyu Yang et al.
2. L136, The concept of anchor Gaussian originally comes from Scaffold-GS [1], not from Spec-Gaussian. It would be even better if Scaffold-GS could be cited.
3. It would be even better if Deformable-GS [2] were included in the comparison of quantitative metrics (such as Tabs. 1-2), as it is the first deformation-based dynamic Gaussian splatting method and serves as the baseline for SC-GS.
4. I think the ablation study is not thorough enough. Although the ablation on batch size is appreciated, I am unclear about the roles of other components of the method. Corresponding ablation studies are needed to make the paper more robust. For example, I am very interested in understanding the impact of the `Sub-Rot Network` mentioned in L218 on the rendering metrics.
[1] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. 2023. Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering. arXiv preprint arXiv:2312.00109 (2023)
[2] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3D Gaussians for high-fidelity monocular dynamic scene reconstruction. arXiv preprint arXiv:2309.13101, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: A small tip: In Tab. 1, the authors used the metrics from the SC-GS paper. However, SC-GS did not use a consistent background; for example, the `Bounce` (and maybe `Trex`) scene used a white background. The authors clearly used a black background consistently. Although the reported metrics in the table show a slight decrease compared to SC-GS, I greatly appreciate the authors' honesty.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the vote of confidence in our manuscript and for the in-depth feedback and suggestions, which we are happy to incorporate and feel will improve the manuscript. We respond to all the questions and concerns raised in the answers below.
***W1: I think the following papers should also be cited, because they have made significant contributions to the early dynamic scenes reconstruction based on Gaussian splatting …***
Thank you for suggesting this; we will cite these papers in our manuscript. In [b1], a novel scene representation called Spacetime Gaussian Feature Splatting is introduced. This approach extends 3D Gaussians with temporal opacity and parametric motion/rotation, enabling the capture of static, dynamic, and transient scene content; additionally, it incorporates splatted feature rendering to model view- and time-dependent appearances while maintaining a compact representation size, and it is notable for its high rendering quality and speed while also being storage-efficient. Method [b2] proposes approximating the spatiotemporal 4D volume of a dynamic scene by optimizing a collection of 4D primitives with explicit geometry and appearance modeling; it uses 4D Gaussians parameterized by anisotropic ellipses and view-dependent, time-evolved appearances represented by 4D spherical harmonics coefficients. We agree that both these works have significantly contributed to the reconstruction of dynamic scenes and align well with the topic under discussion. **Both of them will be included in our manuscript.**
[b1] Li et al., 2024, “Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
[b2] Yang et al., 2024, “Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting,” Proceedings of the International Conference on Learning Representations
***W2: L136, The concept of anchor Gaussian originally comes from Scaffold-GS [1], not from Spec-Gaussian. It would be even better if Scaffold-GS could be cited.***
Thank you for this excellent comment. In [c1], the authors introduce the concept of anchor points to tackle the problem of overfitted models caused by redundant Gaussians. Scaffold-GS addresses this issue by distributing local 3D Gaussians according to anchor points and predicting attributes based on viewing direction and distance. This approach reduces redundancy, enhances scene coverage, and maintains high-quality rendering with improved robustness to view changes. **Therefore, we will provide an extended description of [c1] in Related Works.**
[c1] Lu et al. 2024, “Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering,” Proceedings of the IEEE/CVF CVPR
***W3: It would be even better if Deformable-GS [2] were included in the comparison of quantitative metrics (such as Tabs. 1-2), as it is the first deformation-based dynamic Gaussian splatting method and serves as the baseline for SC-GS.***
The model Deformable-GS [d1] is indeed a prototype of the SC-GS model. Deformable-GS is one of the first works targeting deformable/dynamic scenes. It introduces an MLP-based deformation network to adapt Gaussian parameters to a given timestep. Below are the PSNR results of our comparison with SC-GS and Deformable-GS, which we will include in our article.
|Dataset:|Hook|Jumpin|Trex|Bounce|Hell|Mutant|Standup|
|---|---|---|---|---|---|---|---|
|Deformable-GS|37.42|37.72|38.10|41.01|41.54|42.63|44.62|
|SC-GS|39.87|41.13|41.24|44.91|42.93|45.19|47.89|
|D-MiSo (ours)|38.13|42.05|40.88|41.49|41.49|44.38|47.66|
[d1] Yang et al., 2023, “Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction”, arXiv preprint arXiv:2309.13101.
***W4 I think the ablation study is not thorough enough. Although the ablation on batch size is appreciated, I am unclear about the roles of other components of the method. Corresponding ablation studies are needed to make the paper more robust. For example, I am very interested in understanding the impact of the Sub-Rot Network mentioned in L218 on the rendering metrics.***
The Sub-Rot Network, despite being a simple single-layer network, plays a crucial role in the 2nd stage. We appreciate it being highlighted for further analysis, as this allows us to demonstrate its significance. Below is a numerical comparison of PSNR for models with and without the Sub-Rot Network. Furthermore, we would like to emphasize that, due to its shallow architecture, incorporating the Sub-Rot Network does not significantly impact training time. We present the results on the *jumpingjacks* dataset, distinguishing between batch sizes of 4 and 8. The experiments were performed using an RTX 4090 GPU.
| |batch = 4|batch = 4|batch = 8|batch = 8|
|---|---|---|---|---|
|Sub-Rot Network|with|without|with|without|
|PSNR|40.42|39.98|41.65|41.27|
|Training time [h]|1:18|1:12|2:08|1:45|
|Rendering time [fps]|175|186|190|259|
Additionally, Reviewer 1 (u5NP) suggested we consider the role of the number of Sub-Gaussians/Core-Gaussians. We agree that this is an interesting analysis, and we will incorporate it into the revised manuscript.
***Q1: A small tip: In Tab. 1, the authors used the metrics from the SC-GS paper. However, SC-GS did not use a consistent background; for example, the Bounce (and maybe Trex) scene used a white background. The authors clearly used a black background consistently. Although the reported metrics in the table show a slight decrease compared to SC-GS, I greatly appreciate the authors' honesty.***
Thank you for raising this point. We believe it is important to use a unified framework for evaluation. Therefore, following most papers, we introduced results obtained on a black background.
---
Rebuttal Comment 1.1:
Title: Keep positive
Comment: Thanks to the authors for their response. I am satisfied with the rebuttal and will maintain my positive evaluation. There are a few points to note for the release version:
- The resolution of Deformable-GS is 800x800, while SC-GS and D-MiSo are 400x400. I don't think it's necessary to align them completely, but it would be better to clarify this.
- For SC-GS FPS measurement, KNN should be fixed, and there is no need to query for every iteration.
- I suggest that the authors discuss the differences and connections between D-MiSo and GaMeS.
---
Reply to Comment 1.1.1:
Title: Authors’ response
Comment: We thank the Reviewer for the valuable comments and are keen to follow up on the provided suggestions:
- We will add information on image resolution and include a table with 800x800 resolution in the Appendix.
- We agree that KNN can be fixed and will include a comment to that effect in the paper.
- We will discuss the differences and connections between D-MiSo and GaMeS in the main paper. | Summary: This paper introduces a Dynamic Gaussian Splatting representation that allows for easier object shape editing at test time. This is achieved through the use of Dynamic Multi-Gaussian Soup (D-MiSo), a mesh-inspired multi-gaussian system. Specifically, a GS is divided into two components: Core-Gaussian, which models the transformation of groups of sub-gaussians, and sub-Gaussian, which handles the final geometry and color. This enables different levels of editing by either changing the core-Gaussian or sub-Gaussians for more fine-grained control.
Strengths: 1) The paper demonstrates substantial improvement in the level of control and editability of Gaussian Splatting (GS) at test time, while maintaining competitive PSNR/SSIM scores.
2) I appreciate the inclusion of mesh visualization estimates (Figures 9 and 11), which make the method more convincing.
3) The method is tested on various datasets with varying complexity and shows good performance.
Weaknesses: I find the paper's writing style difficult to follow and some sentences do not even make sense. Additionally, there are numerous components involved in the pipeline that are challenging to keep track of: multi-Gaussian, core-Gaussian, sub-Gaussian, core-triangle soup, multi-triangle soup, and sub-Gaussian soup (which may be the same thing?). I would appreciate a revision in writing to improve clarity.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Is there a strategy for selecting hyperparameters such as the number of Gaussian and Sub-Gaussian components?
2) It is unclear why the proposed method would perform worse than SC-GS on metrics like PSNR/SSIM for NeRF-DS and D-NeRF. Does this suggest that increased editability comes at the cost of model performance, or is it simply a matter of hyperparameter tuning?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have discussed the limitations of their method, but I was unable to locate a statement regarding the broader impacts of the proposed method in the Limitations section, as claimed by the author.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the constructive remarks regarding our manuscript, to which we responded next to the posted questions below. We will revise the manuscript in accordance with the raised concerns.
***W1 “I find the paper's writing style difficult to follow … I would appreciate a revision in writing to improve clarity.”***
Thank you for this comment. We appreciate the complex nature of our manuscript, and we will edit all the relevant parts for clarity. In the paper, we introduced the Multi-Gaussian component, which comprises a Core-Gaussian "parent" and Sub-Gaussian "children," each parameterized by a triangle. Accordingly, the Core-Triangle corresponds to the Core-Gaussian, and the Sub-Triangle corresponds to the Sub-Gaussian. The terms "Triangle" and "Gaussian" are used contextually:
- When discussing rendering or properties such as color and transparency, we use "Gaussian."
- When referring to parameterized Gaussians and their modifications (e.g., location, scale, rotation), we use "Triangle."
Fig. 3 illustrates these differences. **We will explain all these objects and clarify them in the Introduction.**
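To make the parent-child relationship concrete, below is a minimal, hypothetical Python sketch (not our actual implementation; the names and the rigid-transform simplification are illustrative only): a Sub-Triangle stored in its Core-Triangle's frame is carried along by any edit applied to the core, which is how modifications propagate from Core-Gaussians to Sub-Gaussians.

```python
# Hypothetical sketch of the parent-child idea (not the implementation from
# the paper): a Sub-Triangle stored relative to its Core-Triangle follows
# any rigid transform applied to the core.
import numpy as np

def transform(vertices, R, t):
    """Apply rotation R (3x3) and translation t (3,) to an (N, 3) vertex array."""
    return vertices @ R.T + t

# A Core-Triangle (3 vertices) and one Sub-Triangle, the latter stored as
# offsets from the core's centroid (i.e., in the core's local frame).
core = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
sub_offsets = np.array([[0.1, 0.1, 0.0], [0.3, 0.1, 0.0], [0.1, 0.3, 0.0]])

# Editing the core: a 90-degree rotation about z plus a translation.
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([2.0, 0.0, 0.0])
new_core = transform(core, R, t)

# The sub-triangle is carried along automatically: rotate its local offsets
# and re-anchor them at the transformed core centroid.
new_sub = transform(sub_offsets, R, np.zeros(3)) + new_core.mean(axis=0)
```

In the full method, deformation networks additionally adjust the components at each timestep; the sketch only shows the propagation principle behind the parameterization.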
***Q1: Is there a strategy for selecting hyperparameters such as the number of Gaussian and Sub-Gaussian components?***
Our model operates on several hyperparameters, primarily derived from the basic Gaussian Splatting (GS) framework. Given the introduction of additional stages and the multi-Gaussian component, one of the new hyperparameters is the number of Core-Gaussians and Sub-Gaussians. We are pleased to see this novel aspect being acknowledged. The number of Core-Gaussians is determined automatically in stage 1, utilizing the pruning mechanism implemented in GS.
The table below illustrates the training time corresponding to these parameters, using the *jumpingjacks* dataset as an example. During the experiments, we used an RTX 4090 GPU and densification until 5000 iterations (1st stage). The table shows the speed of the 1st stage, which prepares the Core-Gaussians. Too few Core-Gaussians can cause a drop in quality (PSNR metric); the experiments therefore suggest that the number of Core-Gaussians obtained after the first phase is sufficient. The Sub-Rot network is shallow enough (a single layer in all of our experiments) that its training-time cost is not apparent. In all cases, a larger number of Sub-Gaussians improved the quality of the renders.
|n_sub|1|10|25|1|10|25|1|10|25|
|---|---|---|---|---|---|---|---|---|---|
|Iteration - start of deform network|2000|2000|2000|3000|3000|3000|5000|5000|5000|
|N_core after 1st stage|3290|3340|3434|2841|2974|2936|1803|1798|1807|
|PSNR|38.94|41.35|41.65|38.52|41.14|41.41|33.63|37.14|37.47|
|Train time 1st stage [h]|00:05:14|00:04:48|00:05:17|00:04:17|00:04:33|00:04:16|00:03:27|00:03:02|00:03:21|
|Train time 2nd stage [h]|1:51:17|1:44:21|2:03:25|1:39:39|1:53:18|1:50:25|1:48:54|1:41:12|1:56:28|
|Sum: Train time [h]|1:56:31|1:49:09|2:08:42|1:43:56|1:57:51|1:54:41|1:52:21|1:44:14|1:59:49|
***Q2: It is unclear why the proposed method would perform worse than SC-GS on metrics like PSNR/SSIM for NeRF-DS and D-NeRF. Does this suggest that increased editability comes at the cost of model performance, or is it simply a matter of hyperparameter tuning?***
Our main contribution is the ability to edit a dynamic scene better and more efficiently. As pointed out, there is generally a trade-off between editing and the quality of the reconstruction. However, **in our model, such effects are minimal**: we obtained marginally worse results on synthetic datasets (Tab. 1 in our draft paper) **and better results on full scenes** (Tab. 3 and Tab. 4 in our draft paper). We tentatively believe the latter is caused by the suboptimal algorithm used to choose core points in SC-GS: it uses a heuristic, distance-based approach to select core points, which might be problematic at a large scale. Our solution uses classical GS optimization procedures to establish the Gaussian number and size; therefore, D-MiSo covers 3D scenes better.
***L1: The authors have discussed the limitations of their method, but I was unable to locate a statement regarding the broader impacts of the proposed method in the Limitations section, as claimed by the author.***
Our model significantly improves rendering quality and advances 3D scene reconstruction and rendering, impacting multiple domains by enabling more realistic and efficient 3D modeling and animation. This technology could enhance VR/AR experiences [a1], robotics [a2], and medical imaging [a3]. It could also be used for interactive education [a3], scientific visualization, and a plethora of other commercial applications like product design and real estate [a4]. We will edit our manuscript to include these considerations and potential application fields of our method.
[a1] Linning et al. 2023, “VR-NeRF: High-Fidelity Virtualized Walkable Spaces”, SIGGRAPH Asia 2023 Conference Papers
[a2] Lisus et al., 2023, "Towards Open World NeRF-Based SLAM," 2023 20th Conference on Robots and Vision (CRV), Montreal, Canada, pp. 37-44, doi: 10.1109/CRV60082.2023.00013
[a3] Huang et al. 2013, “Exploring Learner Acceptance of the Use of Virtual Reality in Medical Education: A Case Study of Desktop and Projection-Based Display Systems,” Interactive Learning Environments 24 (1): 3–19.
[a4] Li et al. 2022, “Deep Learning of Cross-Modal Tasks for Conceptual Design of Engineered Products: A Review”, ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
---
Rebuttal Comment 1.1:
Comment: Thank you for your comprehensive answer. I am still curious why your method performs worse on synthetic data, which should be easier?
---
Reply to Comment 1.1.1:
Title: Authors’ response
Comment: It is a non-trivial question requiring substantial consideration; thank you for posting it.
- **D-MiSo vs SC-GS performance on synthetic data:**
The synthetic dataset from the paper describing D-NeRF [11] is the first and most explored dataset of dynamic 3D objects. **Modern methods on the D-NeRF dataset produce similar results with small differences caused mainly by parameter optimization.** Results on the D-NeRF dataset are proof that our model has attained the presently possible optimal level of rendering quality (Reviewer 2: “Experimental results show that this method not only matches the rendering quality of SC-GS but also enables the editing of more extreme large motions.”). **Consequently, our model's main contribution is better editing quality.**
- **D-MiSo vs SC-GS performance on real data:**
In the SC-GS paper, the authors discuss the use of control points. Each control point influences the closest Gaussian components. The number of control points is a crucial hyperparameter. As detailed in Section S1.1 of the SC-GS supplementary material, determining the **optimal** number of control points is essential for achieving accurate reconstructions. However, following the SC-GS supplementary material, we think that in real scenes with complex backgrounds, identifying the appropriate number of control points may be challenging. Increasing the number of control points does not necessarily lead to better performance due to optimization challenges.
In contrast, our D-MiSo model automatically covers space, and no additional hyperparameters are introduced. The number of Core Gaussians is automatically found by the classical Gaussian Splatting optimization procedure in the first part of optimization. Later, each scene part can be modified independently. As a result, our model not only achieves high-quality reconstructions but also facilitates more effective editing, thus balancing reconstruction quality with editing capability. In essence, our solution does not use additional parameters, and our model builds a natural division of objects by the Core Gaussian component. It also allows for more straightforward modification. **Consequently, our model's main contribution is better editing quality.**
[11] Pumarola, Albert, et al. "D-NeRF: Neural radiance fields for dynamic scenes." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. | Rebuttal 1:
Rebuttal: We thank the Reviewers for their excellent comments and constructive remarks regarding our paper, as well as for their positive feedback. We are also thankful for noticing the main contribution of our model, which is “enabling the editing of more extreme large motions” that “matches the rendering quality of SC-GS”, as underlined by Reviewer 2. The two main strands of the received feedback concern the need for further ablation studies and a more comprehensive literature review. We feel that these changes will enhance our paper, and we will incorporate them into our manuscript.
**Ablation studies and hyperparameter analysis**
Most of the comments received concerned the need for an ablation study. First, we contrasted our model with and without the Sub-Rotation Network, as shown in the Table below, utilizing the commonly used *jumpingjacks* dataset. **These results show that the Sub-Rot Network is crucial to obtaining SOTA PSNR. With the Sub-Rot Network, we gain approximately 0.5 PSNR, which is crucial for reconstructing small elements** (like human fingers in the *jumpingjacks* dataset).
| | batch = 4 |batch = 4 |batch = 8 | batch = 8 |
|---|---|---|---|---|
| Sub-Rot Network | with |without | with | without |
|PSNR | 40.42 | 39.98 | 41.65 | 41.27|
|Training time [h] | 1:18|1:12|2:08|1:45|
|Rendering time [fps] |175 |186|190|259|
We also present how the size of the network influences training time and render speed in the next Table utilizing the commonly used *jumpingjacks* dataset. **These results show that deformable networks and Sub-Rot Networks are not costly in terms of rendering time, resulting in real-time rendering.**
In practice, batch size has a higher impact than deformation network depth.
| Deformation network depth (number of layers) | 4 | 6 | 8 | 10 |
|---|---|---|---|---|
| Batch: | 4 | 4 | 4 | 4 |
| PSNR | 40.91 | 40.75 | 40.42 | 40.26 |
| Training time [h] | 1:21 | 1:22 | 1:18 | 1:31 |
| Render time [fps] | 138 | 167 | 175 | 144 |
| Batch: | 8 | 8 | 8 | 8 |
| PSNR | 41.86 | 42.01 | 41.65 | 41.37 |
| Training time [h] | 2:00 | 1:58 | 2:08 | 2:07 |
| Render speed [fps] | 227 | 221 | 190 | 192 |
The following Table presents the training time and storage cost, as well as the FPS for rendering for each dataset. **These results suggest that our model is memory efficient and that the time required to train it is minimal.** We performed the experiments for batches 4 and 8 to show the effect of batch size. In all cases, storage costs decreased, and performance improved with a trade-off of increased training time.
| Dataset: | Hook | Jumpin | Trex | Bounce | Hell | Mutant | Standup |
|---|---|---|---|---|---|---|---|
| Batch: | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| PSNR | 37.77 | 40.42 | 39.56 | 40.63 | 41.44 | 43.38 | 46.07 |
| Time 1st stage | 00:02:48 | 00:02:22 | 00:02:41 | 00:03:12 | 00:02:27 | 00:03:46 | 00:03:17 |
| Time 2nd stage | 1:40:15 | 1:16:18 | 2:14:09 | 1:43:08 | 1:01:27 | 1:48:49 | 1:19:32 |
| Storage cost | 76MB | 53.5MB | 122MB | 131MB | 27MB | 73MB | 43MB |
| Rendering time (fps) | 138 | 175 | 90 | 123 | 185 | 138 | 169 |
| Batch: | 8 | 8 | 8 | 8 | 8 | 8 | 8 |
| PSNR | 38.07 | 41.65 | 40.74 | 40.55 | 41.59 | 44.40 | 47.22 |
| Time 1st stage [h] | 00:05:30 | 0:05:17 | 00:05:38 | 00:05:16 | 00:04:28 | 00:05:51 | 00:05:56 |
| Time 2nd stage [h] | 2:27:08 | 2:03:25 | 2:42:22 | 2:40:30 | 1:43:33 | 2:18:43 | 2:12:12 |
| Storage cost | 32MB | 24MB | 51MB | 80MB | 16MB | 28MB | 20MB |
| Rendering time (fps) | 192 | 190 | 143 | 153 | 205 | 188 | 194 |
**Related Works**
Another recurring piece of feedback concerned extending the body of referenced prior works. In line with this feedback, we will reorganize and extend the literature review to include the suggested related works and make it more complete. We also note that these additional papers report lower scores than SC-GS and D-MiSo. Moreover, we believe that our D-MiSo model has a stronger ability to edit dynamic scenes. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Efficiently Parameterized Neural Metriplectic Systems | Reject | Summary: This paper proposes a new parameterization for neural metriplectic systems which explicitly incorporates structural information about the degeneracy conditions $\{ S, \cdot \} = 0, [E,\cdot ] = 0$ into the model. The model requires $\sim O(n^2)$ learnable parameters for a problem with $n$ state variables instead of some prior methods which need $O(n^3)$. Further, it also encodes this degeneracy condition in a hard constraint, leading to models which will by construction respect these desired physical conditions. The authors provide a deep learning implementation scheme for their method which involves learning $E(x), S(x)$ and using $\nabla E, \nabla S$ to construct the matrices $L, M$ needed for the bracket from observed trajectories of the physical system. The gradients $\nabla E, \nabla S$ needed for the brackets are computed with autodifferentiation. The authors show that this system is trained end-to-end on simple physical systems including a two-gas system and a thermo-elastic pendulum and can outperform existing methods on these benchmarks.
Strengths: This paper motivates the need for metriplectic systems which can be efficiently implemented and that incorporate the dynamical constraints required for these systems (energy conservation and entropy production). This captures a potentially interesting class of physical systems that could be modeled by machine learning methods like those employed in the present work. The method that they present as Algorithm 1 is straightforward and improves upon the cubic time complexity of GNODE or GFINN. The authors also provide an approximation result for their algorithm and support their claims with some experiments.
Weaknesses: While the authors improve the scaling from cubic to quadratic in the number of state variables, the total complexity (quadratic) still scales poorly with size of the problem (number of state variables / dimensions). Further, the current experiments and comparisons were performed on small benchmarks. However, since this paper is the first to point out that the cubic scaling can be improved by reflecting constraints due to degeneracy, I think the experimental component of the contribution is not the most important.
Technical Quality: 3
Clarity: 2
Questions for Authors: Lemma 3.2 involves projection matrices to construct the $L,M$ matrices. How are these chosen in the experiments?
I am not an expert in this area of ML for physical modeling. Could the authors provide some insight into the ultimate goal system they would eventually wish to model with metriplectic neural systems? Are there real world physical data which could be well captured by these kinds of models?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors do mention the primary limitations of this present work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. Below you will find responses to your comments and questions. Please let us know if we can do anything further to aid you in your final decision.
**Weaknesses**
*While the authors improve the scaling from cubic to quadratic in the number of state variables, the total complexity (quadratic) still scales poorly with size of the problem (number of state variables / dimensions). Further, the current experiments and comparisons were performed on small benchmarks. However, since this paper is the first to point out that the cubic scaling can be improved by reflecting constraints due to degeneracy, I think the experimental component of the contribution is not the most important.*
We acknowledge that the experimental component of the paper is limited to small-scale problems, although it is true that the main contribution of our work is not experimental (c.f. the contributions paragraph at the end of Section 1). We also agree that quadratic complexity in the state dimension $n$ is somewhat disappointing, but this is the minimum necessary for expressing an *arbitrary* nondegenerate metriplectic system, which can be seen based on the fact that $L$ depends on an arbitrary skew-symmetric matrix $A\in\mathbb{R}^{n\times n}$. Future work will have to sacrifice some degree of generality to obtain a better scaling rate.
**Questions**
*Lemma 3.2 involves projection matrices to construct the $L,M$ matrices. How are these chosen in the experiments?*
The projection matrices here are determined canonically from the gradients of the learnable energy and entropy $E,S$, and are never formed in practice (although it would be easy to do so). Instead, the products $L\nabla E$ and $M\nabla S$ are formed directly following Lemma 3.2 and the rules of the exterior algebra. This ensures that the necessary computations are done as compactly and efficiently as possible.
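For readers less familiar with this construction, the following numpy sketch illustrates the general idea of enforcing the degeneracy conditions by projection. It is our own illustration under stated assumptions (variable names and the specific projector formulas are ours), not necessarily the exact parameterization of Lemma 3.2: sandwiching arbitrary skew-symmetric and PSD matrices between orthogonal projectors onto the complements of $\nabla S$ and $\nabla E$ yields an $L$ with $L\nabla S=0$ and an $M$ with $M\nabla E=0$.

```python
import numpy as np

# Illustrative sketch (not the paper's exact Lemma 3.2 formulas): build a
# skew-symmetric L annihilating grad S and a PSD M annihilating grad E.
rng = np.random.default_rng(0)
n = 5
gE = rng.normal(size=n)  # stand-in for the learned gradient of E at x
gS = rng.normal(size=n)  # stand-in for the learned gradient of S at x

def proj_complement(v):
    """Orthogonal projector onto the complement of span{v}."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - np.outer(v, v)

A = rng.normal(size=(n, n))
A = A - A.T                  # arbitrary skew-symmetric matrix
B = rng.normal(size=(n, n))  # arbitrary factor for the PSD part

L = proj_complement(gS) @ A @ proj_complement(gS)          # skew, kills gS
M = proj_complement(gE) @ (B @ B.T) @ proj_complement(gE)  # PSD, kills gE

xdot = L @ gE + M @ gS            # metriplectic vector field
assert np.allclose(L @ gS, 0) and np.allclose(M @ gE, 0)  # degeneracy
assert abs(gE @ xdot) < 1e-10     # energy conserved along xdot
assert gS @ xdot >= -1e-10        # entropy non-decreasing along xdot
```

As the rebuttal notes, in practice only the products $L\nabla E$ and $M\nabla S$ need to be formed; the explicit matrices above exist purely for demonstration.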
*I am not an expert in this area of ML for physical modeling. Could the authors provide some insight into the ultimate goal system they would eventually wish to model with metriplectic neural systems? Are there real world physical data which could be well captured by these kinds of models?*
The metriplectic formalism is especially useful for capturing dissipative perturbations of conservative systems. One example is the Navier-Stokes equations, thought of as a perturbation away from the Stokes equations. In particular, metriplecticity gives a useful mechanism for generating interpretable, energy-conserving and entropy-generating machine-learned models given only observed (or simulated) data. For example, with more assumptions on the system in question (to enable better scaling), it may eventually be possible to reliably learn the metriplectic evolution of a climate system given observations at various locations around the world. This would enable a cheap and useful surrogate for such systems which behaves in a thermodynamically realistic manner, potentially enabling better predictive capabilities.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal and response to my questions. I will maintain my current score. | Summary: This paper proposes a parameter-efficient parameterization for neural networks simulating metriplectic systems, which are differential equations that have both an energy-dissipative piece and an energy-conservative piece. The method works by learning several of the required quantities (L and M, which trade off dissipation and conservation, I believe), while also using a small neural network to estimate the dissipation and conservation pieces (E(x) and S(x)). As not all quantities in the state, x, can be observed, they use a time-based diffusion model to emulate the hidden states (e.g. entropy) to develop initial conditions for these. Experiments are performed on two systems of this class, where it seems like the method performs better (probably due to having better inductive biases).
Unfortunately, due to not having a strong physics background, I feel somewhat unqualified to judge many of the technical strengths although things seem reasonable from a skim. I don’t know if I can properly assess novelty and significance as a result.
Strengths: Significance:
- Building better emulators of physical systems that are complicated is a good first step in what the authors term “phenomenological” understanding of these systems.
- Even quicker training time (and demonstrated both practically and theoretically) is quite helpful. I remember one of the original issues with NODE was that it took a very long time to converge.
Clarity:
- Overall, the paper is pretty well written, even if quite dense, and okay to follow for a non expert physicist. I was able to follow at least the ML pieces and the experiments section quite well.
- The relevant literature is reasonably well signposted; I learned a fair bit about the state of this field by checking the references.
Novelty:
- The approach seems to have a clear inductive bias win over the prior works GFINN and GNODE due to better parameterization of the system.
Weaknesses: Unfortunately, the writing ends up being quite dense and technical with minimal outside applications.
Sure, emulating these physical systems in the experiments is quite nice, but what types of applications does this lead to? This is more of a writing based thing and the paper could be refactored around one of these applications if possible.
Technical Quality: 3
Clarity: 3
Questions for Authors: **Questions**
L138: Why do GFINNs require cubic scaling if they only need to learn O(rn^2) functions?
In general, what is the lower theoretical bound for learning metriplectic systems?
L363: what are relevant 3-d problems of the size that you’d like to tackle?
Fig 3b: is model parameters measured in thousands here? Don’t think we can have 4.3 parameters…
Could the authors generate plots like Fig 2 for the other two experiments? I think it may be useful to understand why the double pendulum example shows less relative improvement in that system?
Are the two main text experiments fully observed? Could the authors run these without full observations (e.g. with the diffusion for the unobserved states) if that makes sense in this setting?
L226: why tanh activation for this task?
**Writing comments:**
- I would personally suggest moving a bit more of appendix E into the main text, as well as Fig 3. These are both interesting and pretty compelling.
- I would also suggest then moving section 3.1 into the appendix as I don’t think you really need much of that material. Pieces of Section 2 are much too dense and could be moved into the appendix as well; it should probably be half a page at most.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We are glad to hear that you found our paper well-written and could use it to study the current literature in this interesting area of ML.
Below you will find responses to your comments and questions. Please let us know if we can do anything further to aid you in your final decision.
**Weaknesses**
*Unfortunately, the writing ends up being quite dense and technical with minimal outside applications.*
*Sure, emulating these physical systems in the experiments is quite nice, but what types of applications does this lead to? This is more of a writing based thing and the paper could be refactored around one of these applications if possible.*
The primary application of this work lies in producing more interpretable and physically realistic models from data. Because the models produced by our NMS method are provably metriplectic, the conservative and dissipative contributions to the learned dynamics can be clearly identified, the dynamical evolution is (provably) stable, and the generalization error in the model is (provably) bounded. This contributes to the utility of the metriplectic formalism in phenomenological modeling, since any observed or simulated data can be fit to a metriplectic system in a way that "energy" is conserved and dissipative effects are captured through the generation of "entropy". This allows for more physically realistic surrogate models which are seen to train easier and generalize better than structure-uninformed approaches like NODE.
**Questions**
*L138: Why do GFINNs require cubic scaling if they only need to learn O(rn^2) functions?*
This is because the $r$ could be almost as large as $n$. More precisely, we do not know the rank $r$ of the dissipation matrix $M$ ahead of time, except that $1\leq r\leq n-1$. So, the scaling is potentially cubic in $n$.
*In general, what is the lower theoretical bound for learning metripletic systems?*
As illustrated by our parameterizations in Lemma 3.2, the lower bound on learnable functions for a *general* nondegenerate metriplectic system is quadratic in both $n$ and $r$. This is because $L$ depends on a general skew-symmetric matrix field $A$ described by $n(n-1)/2$ functions and similarly $M$ depends on a symmetric Cholesky factor containing $r(r+1)/2$ functions. To do better than this, we must give up on full generality and assume more knowledge about the metriplectic system in question, which is an interesting avenue for future work.
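To make the counting argument above concrete, a small sketch (the function name is ours, introduced purely for illustration) tallies the learnable scalar functions and compares them with a cubic-in-$n$ budget:

```python
def metriplectic_param_count(n: int, r: int) -> int:
    """Learnable scalar functions under the quadratic parameterization:
    a skew-symmetric field A with n(n-1)/2 independent entries plus a
    symmetric rank-r Cholesky-type factor with r(r+1)/2 entries."""
    return n * (n - 1) // 2 + r * (r + 1) // 2

# Even in the worst case r = n - 1 the count stays O(n^2), versus the
# potentially O(n^3) count of rank-agnostic parameterizations.
for n in (4, 16, 64):
    print(n, metriplectic_param_count(n, n - 1), n ** 3)
```

For instance, at $n = 64$ the quadratic count is 4032 functions against a cubic budget of 262144.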
*L363: what are relevant 3-d problems of the size that you’d like to tackle?*
It would be highly interesting to apply an NMS-like approach to large-scale fluid problems such as the Navier-Stokes equations, or to tackle observational data about the climate or atmosphere. On the other hand, the method in this paper cannot apply directly to these scenarios due to its quadratic scaling in the state dimension $n$, and so more than just metriplectic structure will have to be assumed in order to improve this rate.
*Fig 3b: is model parameters measured in thousands here? Don’t think we can have 4.3 parameters…*
Yes, thanks for the catch. We have corrected this in the revised manuscript.
*Could the authors generate plots like Fig 2 for the other two experiments? I think it may be useful to understand why the double pendulum example shows less relative improvement in that system?*
We have done this, and included the relevant plots in the Appendix to the revised manuscript. We observe that the double pendulum is more difficult to train for all methods due to its inherent complexity, which is likely why there is less relative improvement in this case. Please refer to the PDF file in the global response.
*Are the two main text experiments fully observed? Could the authors run these without full observations (e.g. with the diffusion for the unobserved states) if that makes sense in this setting?*
Good question. The main experiments are *not* fully observed, which in particular makes them "harder": the network has to figure out the evolution of entropy density on its own. Assuming access to the full state data improves results substantially, as can be seen in Table 3 in Appendix C.
*L226: why tanh activation for this task?*
We choose tanh because of its smoothness. Our theoretical results assume a sufficient degree of regularity in $L,M,E,S$ (at least $C^1$ in most cases), and tanh ensures that this holds in our architectures. On the other hand, many different activations could be used instead, and it is possible (maybe even likely) that infinite regularity is not the best option in all cases.
**Writing comments**
- *I would personally suggest moving a bit more of appendix E into the main text, as well as Fig 3. These are both interesting and pretty compelling.*
- *I would also suggest then moving section 3.1 into the appendix as I don’t think you really need much of that material. Pieces of Section 2 are much too dense and could be moved into the appendix as well; it should probably be half a page at most.*
Thank you for the suggestions. In view of your comments, in the revised version we have moved some of the details in Section 2 into the Appendix in order to move some more of Appendix E (plus Fig 3) into the main text. We have elected to keep Section 3.1 as it is, since this material is essential for understanding the NMS method.
---
Rebuttal 2:
Comment: Thanks for responding to my questions. I now have a bit better understanding of this paper, and still think it should be accepted.
> L138
Thanks, that makes sense, although I believe it could reduce the computational gains compared to the other methods.
Thank you for also attaching the figures for double pendulum. I see that exploiting the structure as your approach does tends to improve the qualitative performance as well. | Summary: This work presents a method for learning metriplectic systems from data. Metriplectic systems are a model which conserve energy and produce entropy, two desirable features. Their method, termed “neural metriplectic systems” (NMS), is based on a more efficient system parametrization. The authors also prove universal approximation results on non-degenerate systems, and generalization error bounds. They verify that their method outperforms other metriplectic-learning baselines, GNODE and GFINN, on two physical experiments.
Strengths: Originality: Although I am not at all an expert in this field and therefore cannot properly judge, it seems that the main theorem (Theorem 3.4) is novel and non-trivial.
Quality: The proposed method outperforms baselines in two experimental settings, verifying the expected gain from having a more efficient parametrization. The corresponding theoretical results, on universality and generalization, provide a fairly thorough picture of the method.
Significance: Within the field of learning metriplectic systems, this paper seems to make a valuable contribution and improve on prior work.
Clarity: The paper is very well-written, and mathematically rigorous. Although the details are not accessible to someone without a background that matches the subject material rather closely, the high-level ideas about the benefits of metriplectic systems, what past work has done, and the advantages of their new method, are conveyed well.
Weaknesses: Clarity:
1. Although well-written, the paper is not accessible to most machine learning audiences, and seemingly requires the reader to already have a physics background in phenomenological modeling, or exterior algebra.
2. Mathematical terms such as algebraic brackets, Poisson brackets, degenerate metric brackets, etc. should be defined in the beginning of the paper, or with a reference to a textbook or other paper defining them. The “exterior algebra” background is suitable for only those with a strong mathematical background already, using terms like “wedge product” and “graded algebra” without definition. (Admittedly, it would be impossible to fully explain all of these concepts in only 9 pages — perhaps a citation to a textbook would be helpful here, but in practice if the reader needs to understand the decomposition result properly to grasp the contribution, then this work may be more suitable for a venue other than a machine learning conference.)
Quality: The baselines in experiments, as well as the methods discussed in the exposition, are all metriplectic. However, it seems like other methods (e.g. which preserve energy but do not increase entropy, or even those which are not physics-informed at all), should be included too.
Significance: I am not sure how widely applicable metriplectic learning systems are, or what alternative (non-metriplectic) methods can be used for the same problems. The paper would be improved by providing more of this background/motivation.
Overall, as a non-expert, my main concern is with the suitability of this work for a machine learning conference - I defer to the AC on this point. It seems that the machine learning techniques used within NMS are fairly straightforward, while it is the parametrization in Theorem 3.4 that seemingly constitutes the crux of the method. However, the statement and proof of Theorem 3.4 would be more accessible to a physics or math audience, than an ML audience.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. NMS is a more efficient parametrization than GNODE and GFINN. Is it also more efficient computationally (end-to-end)? (For example, one could in theory imagine a system with very few parameters, that uses them in a forward pass in a computationally intense way.)
2. Are there non-metriplectic methods that are suitable for comparison in the experiments (for example, methods that preserve energy like Hamiltonian networks)?
3. What is the motivation for metriplectic methods overall — do they have drawbacks relative to methods that do not preserve energy/increase entropy? It would be helpful to include these in related work.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We are glad to hear that you found our paper well-written and reflective of a valuable contribution.
Below you will find responses to your comments/questions, including your main concern that the work may not be well-suited for a machine learning audience. For space requirements, we have occasionally truncated your original (italicized) comments with "...". Please let us know if we can do anything further to aid you in your final decision.
*Overall, as a non-expert, my main concern is with the suitability of this work for a machine learning conference - I defer to the AC on this point. It seems that the machine learning techniques used within NMS are fairly straightforward, while it is the parametrization in Theorem 3.4 that seemingly constitutes the crux of the method. However, the statement and proof of Theorem 3.4 would be more accessible to a physics or math audience, than an ML audience.*
It is true that the parameterizations in Theorem 3.4 and the associated theoretical results (Proposition 3.7 and Theorem 3.9) are a primary contribution of this work. However, NMS is fundamentally a machine learning method: the goal is to learn a nondegenerate metriplectic system from data. We think that the use of mathematically rigorous machinery in pursuit of this goal enhances the value of the work and does not disqualify us from high-quality machine learning venues such as NeurIPS. Indeed, the presented mathematics have allowed us to decouple metriplectic structure-preservation from any optimization error incurred during training, leading to a provably structure-preserving model while simultaneously guaranteeing universal approximation and meaningful estimates of the error in the trajectories. Note that several related works have been published in NeurIPS. For example:
- Greydanus et al. Hamiltonian neural networks. NeurIPS 2019
- Finzi et al. Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints. NeurIPS 2020
- Chen et al. Neural symplectic form: Learning Hamiltonian equations on general coordinate systems. NeurIPS 2021
- Lee et al. Machine learning structure preserving brackets for forecasting irreversible processes. NeurIPS 2021
- Gruber et al. Reversible and irreversible bracket-based dynamics for deep graph neural networks. NeurIPS 2023
*Quality: ...it seems like other methods (e.g. which preserve energy but do not increase entropy, or even those which are not physics-informed at all), should be included too.*
We agree that additional experiments are always helpful. Note that the case of networks which are not physics-informed has already been handled by the NODE architecture (see Tables in Section 5), and it is clear that this approach is not as performant as NMS. The case of purely energy-conserving networks (e.g., Hamiltonian NNs) is being included in the revision, and is also notably less performant than NMS. This is expected, since the underlying dynamics are not Hamiltonian.
*Significance: I am not sure how widely applicable metriplectic learning systems are...*
The metriplectic formalism has been widely adopted in nonequilibrium thermodynamics, and a large variety of interesting physical systems can be understood through this lens. Besides the references mentioned in the Introduction, a representative monograph with many examples is [1]. While we are limited by space requirements, we have added a bit more background information to the revised Section 1.
[1] Öttinger, Hans Christian. Beyond equilibrium thermodynamics. Vol. 5. Hoboken: Wiley-Interscience, 2005.
**Weaknesses**
*1. Although well-written, the paper is not accessible to most machine learning audiences...*
It is true that a background in the relevant mathematics is useful for fully understanding our work. However, we do not assume that all readers possess this knowledge; references are left for those seeking more background, and the paper has been written so that non-experts can follow the main ideas at a high level.
*2. Mathematical terms such as algebraic brackets, Poisson brackets, degenerate metric brackets, etc. should be defined in the beginning of the paper, or with a reference to a textbook or other paper defining them...*
Note that the first set of requested terms are defined in the second paragraph of page 2, and the second in Section 3.1, albeit in a terse way because of the space requirements. Since this was not immediately clear, we have attempted to include a few more details and citations in the revised manuscript.
**Questions**
*1. NMS is a more efficient parametrization than GNODE and GFINN. Is it also more efficient computationally (end-to-end)?*
This is a good question. While NMS is not epoch-to-epoch competitive with the structure-agnostic NODE in cost, we observe that it does perform better than GNODE and GFINN for the same number of parameters. This is illustrated in Figure 3b in Appendix D.
*2. Are there non-metriplectic methods that are suitable for comparison in the experiments?*
As mentioned above, we plan to include a comparison to Hamiltonian NNs in the revised manuscript.
*3. What is the motivation for metriplectic methods overall — do they have drawbacks relative to methods that do not preserve energy/increase entropy? It would be helpful to include these in related work.*
Metriplectic methods are designed to encode the first two laws of thermodynamics in a way that is suitable for modeling physical systems with dissipation. By capturing dissipative phenomena through entropy gain, metriplectic models are interpretable and thermodynamically closed. Their primary drawback is theoretical: it is difficult to write physical systems in metriplectic form, and there is no general algorithm for doing so. This is what motivates machine learning approaches like NMS, which remove the need for this difficult step. For clarity, we have added a few more details about this in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks to the authors for their response. I retain my low confidence rating and concern about the understandability of the paper, but will increase my score. | null | null | Rebuttal 1:
Rebuttal: Thanks to the reviewers for their helpful feedback and interest in our article. We are pleased to hear that all reviewers consider it a valuable contribution to the field of structure-preserving machine learning.
In response to reviewer dk6W's comment, we are attaching a set of figures for the double pendulum example. Here, we show the trajectories of the ground-truth and predicted state variables, entropy, and energy. Note that the proposed NMS method produces more accurate predictions than previous methods while preserving the metriplectic structure of the system.
Additionally, individualized responses to each review are left below. Please let us know if we can do anything further to aid you in your final decision.
Pdf: /pdf/2d39f66fd7031e0afcec768ab79f48158190835a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Privacy without Noisy Gradients: Slicing Mechanism for Generative Model Training | Accept (poster) | Summary: The manuscript introduces a slicing privacy mechanism for training generative models without noisy gradients. This mechanism injects noise into random low-dimensional projections of private data, providing strong differential privacy guarantees. The study introduces the smoothed-sliced
$f$-divergence and a kernel-based estimator for it, allowing the training of generative models without adversarial training. Empirical results show improved synthetic data quality compared to existing methods.
Strengths: - The introduction of the slicing privacy mechanism and smoothed-sliced $f$-divergence represents a novel approach to training privacy-preserving generative models, addressing the limitations of noisy gradient-based methods.
- The paper provides strong theoretical foundations for the proposed methods, including privacy guarantees and statistical consistency of the smoothed-sliced $f$-divergence.
- Extensive experiments demonstrate the method's effectiveness in generating high-quality synthetic data across various datasets, outperforming baseline methods.
Weaknesses: - This is a very solid piece of work. The proposed method is simple yet effective, and I have no particular concerns or issues with it.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and for appreciating the merits of the work! If there are any further questions that might lead to improving your score please let us know. | Summary: The paper proposes to add noise to randomly projected private data along with optimizing a newly proposed metric smoothed-sliced f-divergence to train generative models. Such paradigm can circumvent adding noise to gradient and enable more architecture choices. Experiment show the proposed method perform competitively with existing methods.
Strengths: 1. The proposed approach directly adds noise to the data, thus enabling more data processing possibilities on privatized data.
2. The proposed approach is shown to have consistency and perform competitively with baseline algorithms such as DP-SGD, PATE, and SliceWass.
Weaknesses: 1. I feel the baseline algorithms are kind of weak for the setting considered in the paper (synthetic tabular data generation). I am not an expert in this specific area, but SliceWass was proposed 3 years ago, and DP-SGD and PATE are classical DP algorithms with many variants improving upon them in recent years; e.g., Adam also has DP versions.
2. The algorithm is proposed for generative model training, at least that is what is claimed in the title and abstract, but it is only tested on tabular data.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Have the authors tested the algorithm on other types of data, such as images or text?
2. Do the authors have explanations why the proposed approach could be better than the baseline algorithms?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I did not find any limitation discussion specifically for the proposed approach. Adding it would be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s careful reading of our paper and thoughtful comments!
---
**Q1. Weak baselines**
A1. We have added experimental comparisons to the state-of-the-art MERF method in the attached pdf. In Figure 1 of the attached PDF, we have considered private image generative modeling for MNIST, showing that our approach outperforms the state-of-the-art baseline MERF method for larger privacy budgets. In this experiment, note that MERF, as implemented by the authors thereof, only uses the mean embedding of the real data in each class. This allows them to use much lower noise and achieve better results for low epsilon levels, but damages performance for higher privacy budgets since their approach cannot learn the diversity of each class’s distribution (due to taking the mean only). This explains the trends we see and emphasizes the advantages of our approach that requires no such limiting preprocessing step and can retain the full diversity of the data distribution.
In Table 1 of the PDF, we also add MERF as a baseline to the tabular data experiment in the main text, showing that our proposed method (often significantly) outperforms MERF in 20 out of the 24 metrics across five datasets, including all 10 categorical metrics, 4 out of 5 classifier F1 scores, and 6 out of 9 numerical metrics. We used the implementation at https://github.com/ParkLabML/DP-MERF for this experiment.
---
**Q2. Other types of data.**
A.2 See the response to the question above, where we expand our treatment of image data (in addition to the domain-adaptation experiments in the original submission for image data).
---
**Q3. Why the proposed approach could be better than the baseline algorithms**
A3. Our intuition is that training generative models can be very unstable and requires careful design of the model architecture. Traditional approaches like DP-SGD, PATE, and their variants ensure differential privacy (DP) by modifying the training algorithm. This modification makes it hard for the model to converge due to gradient clipping and the noise added to the gradient, leading to potential issues like mode collapse, especially since hyperparameter tuning is challenging under DP constraints. In contrast, our algorithm separates DP from the generative model training by adding noise to low-dimensional projections of real data, which are then used to train the generative models. Although the noisy data may still affect the quality of the synthetic data, hyperparameter tuning becomes much easier, and any optimizer (e.g., SGD) can be applied effectively to optimize the generative model.
---
**Q4. Discussions about limitations.**
A.4. Thank you for highlighting this matter. In response to your concerns, we provide a discussion on the limitations and potential future directions below. We will ensure this discussion is incorporated into the revised paper.
In this paper, we introduce a generic framework for training generative models with differential privacy guarantees. We demonstrate the efficacy of our method through numerical experiments on both tabular and image data. It would be interesting to extend our framework to other types of data, such as natural language processing and time-series data. We establish an asymptotic statistical consistency guarantee for our algorithm. An interesting direction for future research would be to derive finite sample guarantees, specifically sample complexity bounds for our method. Additionally, we believe it is crucial to investigate the capabilities and limitations of private synthetic data from various perspectives. For instance, if a machine learning model trained on synthetic data exhibits algorithmic bias when deployed on real data, it is important to identify the source of this bias and correct the model.
We hope the above answers address your concerns, and if any questions remain, please let us know! | Summary: This paper proposes a DP generative modeling technique via f-divergence and random projection. Specifically, both real data and synthetic data are randomly projected into a lower-dimensional space, where the noise is added to the aggregation of the projected data such that the effect of individual data point is bounded. Minimizing the smoothed-slicing f-divergence between the noisy embeddings will guarantee the match of two distributions, thereby can be used to train a generative model. Compared to adding noise to gradients (e.g. DP-SGD), this method is more convenient and scalable. Experiments show that it generally outperforms prior related methods in synthesizing tabular data.
Strengths: 1. Looking for better alternatives to DP-SGD is an active research area to circumvent the common issues DP-SGD has in practice, thus is well-motivated and of wide interest in the related field.
2. The method is complete, with full description and theoretical privacy analysis.
3. Empirically, multi-dimensional evaluation metrics are considered, and the proposed method outperforms other alternatives in most metrics.
Weaknesses: 1. The proposed method is similar to the MMD-based methods (references on line 96) in the following way:
+ MMD-based methods apply MMD as the divergence measure, since MMD(P, Q)=0 iff P=Q, similar to $D_f$(P, Q)=0 iff P=Q, thus both divergence measure can be used to train a generative model.
+ MMD-based methods minimize the distance between kernel mean embeddings (KME) of real data and generated data, while noise is added to the KME to retain DP guarantee. In this work, both real and generated data are projected to an embedding space, and noise is injected into the embeddings to retain DP guarantee, so the idea of the overall paradigm is quite similar.
+ KME in theory is an infinite-dimensional embedding, which is not convenient to add Gaussian noise, so those works propose to approximate KME by projecting it into a finite-dimensional space. In this work, in theory you have infinitely many directions to slice where only a finite number ($m$) of directions are randomly chosen. So both of you apply the finite-projection idea and therefore both of you will have approximation errors.
In light of the above similarity, the current discussion on the difference and association with MMD-based methods are not adequate. Also, it is more ideal to include some of those methods in the empirical comparison (Table 1), because they ran experiments on tabular dataset as well.
2. For a work on DP, I did not even find a sensitivity analysis, which is crucial for proving DP guarantees. Even if it is trivial, it should be included for completeness.
3. It looks to me that adding noise to synthetic data (eq. 2) is unnecessary, and will hurt the model utility. Can you please explain why?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Existing methods move to adversarial dual form due to "...they often suffer from scalability issues and are not friendly to gradient-based optimization..." Now you circumvent the adversarial training, then does the proposed method suffer from scalability issues or not friendly to gradient-based optimization?
2. Maybe I am out of the field, but what does "slicing and smoothing" (line 116) exactly mean? What are the differences? And you term the proposed divergence "smoothed-sliced f-divergence", what does "smoothed-sliced" mean?
3. In remark 1 the authors claim that [RL21] did something wrong in their derivation because the slicing matrix U is not included in the privacy mechanism. I am not familiar with [RL21], but is U independent of the dataset X? Why will U affect the privacy analysis?
4. Line 221, "...we can achieve a tighter privacy bound by reducing a factor of..." How this factor is derived?
5. The proposed method seems quite universal, i.e., it should be scalable to other domains like image datasets as well; is it true?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: PART 1
We thank the reviewer for the thoughtful comments and for appreciating the novelty of the work!
---
**Q1. Comparison with MMD-based methods.**
A1. Indeed, this is a great point! As mentioned in the general response, we have added new experiments to address this.
In Figure 1 of the attached PDF, we have considered private image generative modeling for MNIST, showing that our approach outperforms the state-of-the-art baseline MERF method for larger privacy budgets (MERF is an MMD-based method). In this experiment, note that MERF, as implemented by the authors thereof, only uses the mean embedding of the real data in each class. This allows them to use much lower noise and achieve better results for low epsilon levels, but damages performance for higher privacy budgets since their approach cannot learn the diversity of each class’s distribution (due to taking the mean only). This explains the trends we see and emphasizes the advantages of our approach that requires no such limiting preprocessing step and can retain the full diversity of the data distribution.
In Table 1 of the PDF, we also add MERF as a baseline to the tabular data experiment in the main text, showing that our proposed method (often significantly) outperforms MERF in 20 out of the 24 metrics across five datasets, including all 10 categorical metrics, 4 out of 5 classifier F1 scores, and 6 out of 9 numerical metrics. We used the implementation at https://github.com/ParkLabML/DP-MERF for this experiment.
---
**KME in theory is an infinite-dimensional embedding, which is not convenient to add Gaussian noise, so those works propose to approximate KME by projecting it into a finite-dimensional space. In this work, in theory you have infinitely many directions to slice where only a finite number (m) of directions are randomly chosen. So both of you apply the finite-projection idea and therefore both of you will have approximation errors.**
Note that using a finite number of slices is a very different form of approximation than simply projecting onto a finite-dimensional space. As a result, our approach is much more amenable to theoretical guarantees. Firstly, in our setting, any errors from finite numbers of slices can be easily controlled since the slices are independent and identically distributed, while errors from truncating an infinite-dimensional space to a few dimensions are difficult to control theoretically. Secondly, note that our total projection dimension (number of slices × slice dimension) is actually fairly large, oftentimes on the order of, or larger than, the ambient dimension of the data itself. This should imply that the loss of distributional information due to slicing is quite minimal indeed.
---
**Q2. Sensitivity analysis**
A.2. You are absolutely right. In most DP work, particularly when noise is added to a statistical query, a sensitivity analysis lemma is derived to prove the DP theorem. However, our privacy mechanism differs as it is not an additive mechanism; instead, it randomly projects the data into a lower-dimensional space with a Gaussian matrix and then adds noise. We established our (Renyi)-DP guarantee by directly bounding the Renyi divergence (Lemma 2 in the Appendix) and leveraging a prior result that bounds traditional epsilon-delta differential privacy in terms of Renyi-DP. This strategy significantly simplifies our proof. Similar ways of proving DP theorems (without a sensitivity lemma) have been used for other privacy mechanisms, such as in the DP proof for the Johnson-Lindenstrauss transform in [BBDS12].
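As a hedged aside, the two generic ingredients behind such a Renyi-divergence-based proof (standard facts about the Gaussian mechanism from the RDP literature, not a restatement of our Lemma 2) are: for Gaussians with a common covariance,

$$
D_\alpha\big(\mathcal{N}(\mu, \sigma^2 I)\,\big\|\,\mathcal{N}(\mu', \sigma^2 I)\big) = \frac{\alpha \,\lVert \mu - \mu' \rVert_2^2}{2\sigma^2},
$$

so bounding $\lVert \mu - \mu' \rVert_2$ across neighboring datasets bounds the Renyi divergence; and any mechanism satisfying $(\alpha, \varepsilon)$-RDP also satisfies $\big(\varepsilon + \tfrac{\log(1/\delta)}{\alpha - 1},\, \delta\big)$-DP.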
---
**Q3. Adding noise to synthetic data (eq. 2) is unnecessary and will hurt the model utility**
A3. Adding noise to synthetic data may seem counterintuitive, but it is essential for our approach to ensure that the learned model is consistent with the true data distribution. Recall that we minimize the loss function, specifically the smoothed-sliced f-divergence (SD), between the distributions of synthetic and real data. Proposition 1 states that SD = 0 iff the real and synthetic data distributions perfectly match. This property holds only if we apply the same amount of noise to both the real and synthetic data distributions.
To illustrate this intuition, consider an example: let the real data follow a 1D normal distribution $N(0,1)$ and the synthetic data follow $N(0, \sigma)$ with a trainable parameter $\sigma$. If we add noise only to the real data distribution, for instance, Gaussian noise $N(0, \sigma_{\text{noise}})$, minimizing our loss function to zero will lead to the synthetic data having a greater variance than the real data: $\sigma = 1 + \sigma_{\text{noise}} > 1$.
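This intuition is easy to check numerically. Below is a minimal sketch with our own toy numbers (parameterized by standard deviations rather than the variances in the example above), assuming the loss is minimized exactly so that a noiseless synthetic model would simply match the noisy real distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_noise = 200_000, 1.0
real = rng.normal(0.0, 1.0, n)                       # real data ~ N(0, 1)
noisy_real = real + rng.normal(0.0, sigma_noise, n)  # noise added to real only

# A noiseless synthetic model matching this distribution must take std
# sqrt(1 + sigma_noise^2), close to 1.414, overshooting the true std of 1.
matched_std = noisy_real.std()
```

Adding the same noise to the synthetic samples removes this bias, which is exactly the role of eq. (2).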
---
**Q4. Does the proposed method suffer from scalability issues or not friendly to gradient-based optimization?**
A.4. Our proposed method should indeed not suffer from these issues, as we circumvent the need for noisy gradients and our objective function is well-behaved and not a min-max problem.
---
**Q5. What does "slicing and smoothing" (line 116) exactly mean?**
A5. “Slicing” refers to (randomly) projecting high-dimensional data into lower-dimensional spaces, like slicing a loaf of bread into thinner pieces. “Smoothing” refers to adding Gaussian noise, which makes the resulting distribution less peaked and more spread out.
We term the proposed divergence “smoothed-sliced f-divergence” since it (randomly) projects the original and synthetic data distributions onto lower-dimensional spaces (i.e., slicing), followed by adding isotropic Gaussian noise (i.e., smoothing), and averaging their f-divergence over all projections.
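To make the terminology concrete, here is a minimal NumPy sketch of "slicing and smoothing" (purely illustrative: the function name and the parameters `m`, `k`, and `sigma` are our own choices, and `sigma` is not calibrated to any privacy budget here):

```python
import numpy as np

def slice_and_smooth(X, m, k, sigma, rng):
    """Slicing: project each record onto m random k-dimensional subspaces.
    Smoothing: add isotropic Gaussian noise to every projection.
    Illustrative only; sigma is NOT calibrated to a privacy budget."""
    n, d = X.shape
    slices = []
    for _ in range(m):
        U = rng.normal(size=(k, d)) / np.sqrt(d)       # random slicing directions
        Z = X @ U.T                                    # one "slice": n x k
        Z = Z + rng.normal(scale=sigma, size=Z.shape)  # Gaussian smoothing
        slices.append(Z)
    return slices

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 32))  # 128 records in 32 dimensions
out = slice_and_smooth(X, m=5, k=4, sigma=0.1, rng=rng)
```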
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. Discussion on Q1 needs to be included in the revision. I am generally satisfied with the responses except Q2.
I disagree with the answer to Q2. I don't get why eq(3) is not an additive mechanism. For the papers you mentioned, both [BBDS12] and [EKKL20] made assumptions on the sensitivity of the output, i.e. <= 1, whereas I don't find similar assumptions in this paper. In fact, the random slicing UX is unbounded. It looks to me that you need to clip the output and then add V that is scaled with the sensitivity.
---
Reply to Comment 1.1.1:
Title: Response and clarification on bounded norm
Comment: Thank you for your prompt response! We are glad to hear that you are generally satisfied with our responses. We will ensure that the discussion on Q1 and the experimental results are included in our revised paper.
Regarding Q2, you are absolutely right that UX would be unbounded without additional assumptions and privacy guarantees would not hold, we apologize for misunderstanding the precise point of your previous statement of Q2. To clarify---our theorem explicitly requires each record in X to have a norm <= 1 (please refer to Line 145 in our paper, and Lines 659–664 for a discussion on how we satisfy this assumption in our experiments). Given this, UX will be bounded (with high probability) since U is a Gaussian matrix with a specified covariance matrix. Hence the sensitivity is immediately controlled allowing us to compute the correct variance of the additive Gaussian noise applied after slicing to achieve the desired privacy guarantees. While our proof strategy goes through Renyi divergence (crucially using the norm bound assumed for X) rather than through a more explicit sensitivity analysis for the linear U mechanism, the principle is fundamentally the same.
Similarly, as you pointed out, the privacy analyses in [BBDS12, EKKL20] rely on the same bounded norm assumption. In the revision, we will add a comment pointing out that this norm bound is key and analogous to sensitivity discussions in other works.
---
Rebuttal 2:
Title: Part 2
Comment: ---
**Q6. In remark 1 the authors claim that [RL21] did something wrong in their derivation because the slicing matrix U is not included in the privacy mechanism. Is U independent of the dataset X? Why will U affect the privacy analysis?**
A.6. Yes, U is generated independently of the dataset X, but the sliced output UX clearly depends on both; hence, conditioned on UX, U and X are dependent! Since the generative model training requires both the projected data and the projection directions (i.e., the U matrix must be known to the model during training), not accounting for the fact that U is known can result in privacy leakage.
As an analogy, let’s consider the simple additive noise setting, where the output from a Laplace mechanism is used. In such a case, any downstream operation must not access the Laplace noise added by the privacy mechanism. If it did, the noise could be used to denoise the answer, even if the noise is independent of the real data. This is a crucial oversight, which we remedy by explicitly including all needed factors in the privacy analysis.
---
**Q7. Line 221, "...we can achieve a tighter privacy bound by reducing a factor of..." How this factor is derived?**
A7. The intuition is that a deterministic U corresponds to a scenario where the adversary can engage with the design of the privacy mechanism and choose how to project the data. This grants the adversary more freedom to influence the design of the privacy mechanism, resulting in a less effective privacy protection.
Technically, we derive this factor by comparing the Renyi divergence between two privacy mechanisms, where one has random U and the other has deterministic U (see Proposition 3 in the appendix).
---
**Q8. Is the proposed approach scalable to other domains like image datasets as well?**
A.8. Thank you for bringing up this concern. Indeed, our proposed approach is applicable and scalable to other domains, including image data. For instance, please refer to our domain adaptation experiment (Section 4.2), where we apply our method to MNIST and USPS datasets.
In Figure 1 of the attached PDF, we also have considered private image generative modeling for MNIST, showing that our approach outperforms the state-of-the-art baseline MERF method for larger privacy budgets. In this experiment, note that MERF, as implemented by the authors thereof, only uses the mean embedding of the real data in each class. This allows them to use much lower noise and achieve better results for low epsilon levels, but damages performance for higher privacy budgets since their approach cannot learn the diversity of each class’s distribution (due to taking the mean only). This explains the trends we see and emphasizes the advantages of our approach that requires no such limiting preprocessing step and can retain the full diversity of the data distribution.
In Table 1 of the PDF, we add MERF to the experiment in the main text, showing that our proposed method (often significantly) outperforms MERF in 20 out of the 24 metrics across five datasets, including all 10 categorical metrics, 4 out of 5 classifier F1 scores, and 6 out of 9 numerical metrics. We used the implementation at https://github.com/ParkLabML/DP-MERF for this experiment.
We hope the above answers clarify the reviewer’s view of our work, and if any questions remain, please let us know! | null | null | Rebuttal 1:
Rebuttal: General response:
We thank the reviewers for their thoughtful and helpful comments. Due to the suggestions of several reviewers, in the attached pdf we have included results for two new experiments showing the advantage of our methods over the state-of-the-art MMD-based MERF method. We will add both of these experiments to the revision.
Firstly, in Figure 1 of the attached PDF, we also have considered private image generative modeling for MNIST, showing that our approach outperforms the state-of-the-art baseline MERF method for larger privacy budgets. In this experiment, note that MERF, as implemented by the authors thereof, only uses the mean embedding of the real data in each class. This allows them to use much lower noise and achieve better results for low epsilon levels, but damages performance for higher privacy budgets since their approach cannot learn the diversity of each class’s distribution (due to taking the mean only). This explains the trends we see and emphasizes the advantages of our approach that requires no such limiting preprocessing step and can retain the full diversity of the data distribution.
In Table 1 of the PDF, we add MERF as a baseline to the tabular data experiment in the main text, showing that our proposed method (often significantly) outperforms MERF in 20 out of the 24 metrics across five datasets, including all 10 categorical metrics, 4 out of 5 classifier F1 scores, and 6 out of 9 numerical metrics. We used the implementation at https://github.com/ParkLabML/DP-MERF for this experiment.
Pdf: /pdf/b3d86366fd974e401a45a5d872576533d1e8ad75.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diffusion-based Layer-wise Semantic Reconstruction for Unsupervised Out-of-Distribution Detection | Accept (poster) | Summary: This paper focuses on out of distribution detection. The detection is based on the reconstruction error. The authors use a diffusion model as their generative model and focus on the feature space instead of the original images. The authors test their proposed methods on several different datasets.
Strengths: 1. The overall presentation of this paper is good. It is very easy to follow.
2. Have conducted experiments on different datasets.
Weaknesses: There are several concerns for this paper.
1) The novelty of this paper is very limited. It looks like it simply replaces a generative model with the diffusion model.
2) The current version only presents the "what"---for example, the model includes three components (multi-layer semantic feature extraction, diffusion model for feature reconstruction, and OOD detection head)---but not the "why". More explanations and theoretical analysis may need.
3) The evaluation metrics may be problematic. Due to the nature of the OOD problem, the dataset would be highly imbalanced. In addition to AUROC and FPR95, other metrics that can capture this imbalance should be used.
4) The current ID and OOD are very different (from different datasets), what would happen if the difference between ID and OOD is not so significant? For example, both ID and OOD come from the same dataset (e.g., CIFAR10) but OOD is curated by adding some distortions.
Technical Quality: 2
Clarity: 3
Questions for Authors: Basically, my major concerns include: 1) why it works? 2) the experiments. Please see [Weaknesses] for more details.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Novelty of this paper
Thanks for the valuable comments. We believe the reviewer may have misunderstood our contributions to OOD detection, as suggested by the statement that "It looks like it simply replaces a generative model with the diffusion model". Our main novelty lies in the following three aspects:
Firstly, we are the first to successfully incorporate generative modeling of features within the framework of OOD detection in image classification tasks. This demonstrates that when using diffusion models for OOD detection in such downstream tasks, it is not necessarily required to operate in the pixel space.
Secondly, we devise a multi-layer semantic feature reconstruction mechanism. Performing feature reconstruction on top of the multi-layer semantic features encourages the in-distribution latent features to be distributed more compactly within a certain space, so that in-distribution samples are rebuilt well while OOD samples are comparatively poorly reconstructed. As a result, the projected in-distribution latent feature space is compressed sufficiently to capture the exclusive characteristics of ID images, while the high-level semantic features still provide sufficient reconstruction power for large-scale ID images of various categories.
Thirdly, the proposed Latent Feature Diffusion Network (LFDN) is built on top of the feature level instead of the traditional pixel level, which could significantly improve the computation efficiency and achieve effective OOD detection.
2. More explanations and theoretical analysis may need.
Thanks for the valuable comments. We will improve these descriptions in the camera-ready version.
As illustrated in Figure 1, the proposed framework consists of the following three components:
(1)The multi-layer semantic feature extraction module, which sets up a comprehensive and discriminative feature representation for each image, and could help to better rebuild the samples and encourage the in-distribution features distributed more compactly within a certain space from different semantic layers.
(2) The latent feature diffusion stage, which introduces the DDPM to model the multi-layer semantic features. It builds a latent feature diffusion network to reconstruct the semantic features from their distorted counterparts.
(3) The OOD detection head, which utilizes different evaluation metrics (i.e., MSE, MFsim and LR metric), to measure the reconstruction error. Finally, we can use the reconstruction error to justify whether the input sample belongs to ID or OOD.
The success of our method lies in the fact that the projected in-distribution latent feature space can be compressed sufficiently to capture the exclusive characteristics of ID images, while it also provides sufficient reconstruction power for the large-scale ID images of various categories.
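As a minimal sketch of the detection step (the plain-MSE score and the threshold `tau` are our simplifications here; the MFsim and LR metrics are not reproduced):

```python
import numpy as np

def ood_score(features, reconstruction):
    """Reconstruction-error OOD score: mean squared error between extracted
    semantic features and their reconstruction. Higher score means a worse
    reconstruction and therefore a more likely OOD sample."""
    f = np.asarray(features, dtype=float)
    r = np.asarray(reconstruction, dtype=float)
    return float(np.mean((f - r) ** 2))

def is_ood(features, reconstruction, tau):
    """Flag a sample as OOD when its reconstruction error exceeds tau."""
    return ood_score(features, reconstruction) > tau
```

An ID-like sample, whose features lie in the compact learned space, reconstructs almost perfectly and scores below `tau`; an OOD-like sample does not.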
Besides, to further illustrate the effectiveness of the proposed method, **we also compare the MFsim score distributions between the initial model and the final model after training. We can clearly see that the reconstruction errors of the ID data tend to decrease as the model trains, and the final scores can be distinguished from those of the OOD data.**
3. In addition to the AUROC and FPR95, other metrics that could capture the imbalance characteristics should be used.
Besides the commonly used AUROC and FPR95 evaluation metrics, we also adopt the F1-score and AUPRC evaluation metrics which could capture the imbalance characteristics of the dataset, for comprehensive experimental comparison. As shown in the table below, we compare the proposed method with some of the latest representative Classification-based methods under the F1-score and AUPRC evaluation metrics, respectively.
Comparison of F1-Score
|ID|Method|SVHN|LSUN-c|LSUN-r|iSUN|Textures|Places365|**average**|
|-|-|-|-|-|-|-|-|-|
|CIFAR10|MSP|48.20|80.88|75.74|75.85|81.75|75.90|73.05|
||EBO|46.76|92.72|81.93|81.30|82.53|81.17|77.74|
||ASH-S|67.44|93.55|90.99|91.33|83.12|82.39|84.80|
||ours(+MSE)|79.21|91.04|83.16|82.93|97.43|97.43|88.53|
||ours(+LR)|86.18|91.85|85.74|85.03|97.43|97.43|90.61|
||ours(+MFsim)|91.91|97.42|95.20|94.71|97.43|97.43|**95.68**|
|CIFAR100|MSP|47.92|72.47|70.29|72.08|78.51|68.02|68.22|
||EBO|46.81|75.34|70.91|72.39|78.48|67.84|68.63|
||ASH-S|44.35|74.83|67.94|69.86|78.20|68.79|67.33|
||ours(+MSE)|49.80|73.75|67.56|70.11|97.43|97.43|76.01|
||ours(+LR)|52.39|74.05|68.75|71.13|97.43|97.42|76.86|
||ours(+MFsim)|65.48|96.52|87.39|87.20|97.43|97.43|**88.58**|
We provide the average AUPRC values across six different datasets in Table I of the PDF attached in the global rebuttal.
We can clearly see that our method also obtains the best performances under all the settings.
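For reference, the two headline metrics can be computed directly from raw OOD scores as below (a pure-NumPy sketch under the convention that a higher score is more OOD-like; the inputs are toy values, not the paper's numbers):

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """AUROC as the probability that a random OOD sample out-scores a
    random ID sample (ties count half)."""
    diff = np.asarray(ood_scores)[:, None] - np.asarray(id_scores)[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of ID samples wrongly flagged at the threshold that
    still detects 95% of OOD samples."""
    thresh = np.quantile(ood_scores, 0.05)  # 95% of OOD scores lie above
    return float(np.mean(np.asarray(id_scores) >= thresh))

id_s  = [0.10, 0.20, 0.15, 0.30]  # low reconstruction error: ID
ood_s = [0.80, 0.90]              # high reconstruction error: OOD
# Perfectly separated scores give AUROC 1.0 and FPR95 0.0.
```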
Besides, to analyze the effect of an imbalanced dataset, we design different ratios of ID and OOD data samples for evaluation. As shown in the table below, the detection accuracy tends to decrease as the proportion of OOD samples (the ID:OOD ratio) gets larger.
|ID|ID:OOD Ratios|**average F1-Score**|
|-|-|-|
|CIFAR10|1:0.1|97.39|
||1:0.5|97.29|
||1:1|97.23|
||1:5.7|95.68|
|CIFAR100|1:0.1|97.25|
||1:0.5|96.47|
||1:1|95.49|
||1:5.7|88.58|
4. What would happen if the difference between ID and OOD is not so significant?
To analyze the effect of the difference between ID and OOD on the model performance, we also artificially design the following two experiments:
1) The ID dataset is from CIFAR10, and the OOD dataset is CIFAR100 which shares similar distributions with CIFAR10.
The results are in Table III of the PDF.
2) Both the ID and OOD datasets are CIFAR10, where the OOD dataset is CIFAR10 with random Gaussian noise added.
The results are as follows:
|ID|OOD|Method|FPR95↓|AUROC↑|
|-|-|-|-|-|
|CIFAR10|CIFAR10(add noise)|MSP|90.23|61.63|
|||EBO|88.01|62.49|
|||ASH-S|86.95|62.16|
|||VAE|89.20|51.76|
|||DDPM|95.80|45.00|
|||ours(+MSE)|7.84|98.43|
|||ours(+LR)|16.27|96.76|
|||ours(+MFsim)|**0.14**|**99.97**|
We can clearly see that our method is superior to other methods.
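For completeness, the two headline metrics used in these tables can be sketched as follows (our own illustrative implementation, assuming a higher score means "more likely OOD"):

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    # threshold chosen so that 95% of OOD samples (positives) are detected;
    # FPR95 is then the fraction of ID samples wrongly flagged as OOD
    threshold = np.percentile(ood_scores, 5)
    return float(np.mean(np.asarray(id_scores) >= threshold))

def auroc(id_scores, ood_scores):
    # probability that a random OOD sample scores higher than a random ID sample
    id_s = np.asarray(id_scores)[:, None]
    ood_s = np.asarray(ood_scores)[None, :]
    return float((ood_s > id_s).mean() + 0.5 * (ood_s == id_s).mean())
```

The pairwise formulation of AUROC is O(n·m) and is only practical for moderate sample counts; a rank-based computation is equivalent and faster for large sets.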
---
Rebuttal Comment 1.1:
Comment: Thanks so much for your time on the rebuttal. I have carefully read it. Based on the rebuttal and other available reviews, I think most of my previous concerns have been addressed. I have raised my score accordingly.
---
Rebuttal 2:
Title: Concerns addressed
Comment: Dear Reviewer Jgzd,
Thank you very much for reviewing our paper and providing valuable feedback. We have made the best effort to address all your questions and comments. In particular, we have further clarified the rationale behind our proposed method and provided additional points to highlight its novelty. We have also introduced several metrics that can capture the characteristics of imbalanced datasets. Additionally, we demonstrated the detection performance of our method in scenarios where the ID and OOD datasets are similar, showcasing the scalability of our approach.
We sincerely hope that our responses can address all your concerns. Is there anything that needs us to further clarify for the given concerns?
Thank you again for your hard work and thoughtful review.
---
Rebuttal 3:
Comment: Dear Reviewer Jgzd,
Thank you very much for reviewing our paper and providing valuable feedback. We sincerely hope that our responses can address all the remaining concerns. Thank you again for your great help and many good questions and suggestions, which largely help improve the quality of our paper. We would like to clarify if you have further concerns.
Thanks very much. | Summary: The authors introduce a method for unsupervised out-of-distribution (OoD) detection for image classification tasks. Their approach is based on the semantic reconstruction of latent features in multiple layers of an image encoder using a diffusion model and does not require any OoD data for training. Unlike existing methods, this approach operates on the feature level instead of the pixel level, leading to more efficient OoD detection due to the lower dimensionality of feature encodings. Experiments on multiple established benchmarks, such as CIFAR10 vs. SVHN, LSUN, FMNIST, Omniglot, and CelebA vs. SUN, Textures, and Places365, demonstrate the effectiveness of the OoD detection approach with respect to the evaluation metrics AUROC and FPR95.
Strengths: - Modeling the distribution of feature encodings for OoD detection is an intuitive yet established approach. However, the authors are the first to successfully incorporate generative modeling of features within the framework of OoD detection in image classification tasks.
- The paper is generally well-written and easy to follow. The modules of their OoD detection method—namely, the multi-layer feature extraction, diffusion-based feature reconstruction, and the OoD detection score—are clearly explained. In the ablation studies, the authors demonstrate the effectiveness of each of these modules.
- The proposed method outperforms the state-of-the-art across multiple datasets. In addition to the clear performance gain, the method is also fast. Particularly when compared to other types of generative models, the speedup is significant and relevant in safety-critical applications.
Weaknesses: - The comparisons in the experiments could be extended to ensure fairness. The proposed diffusion and feature-based approach is compared against other types of generative models, such as GLOW and VAE. However, these generative models are only applied at pixel level and rely on image reconstruction error as the OoD score. In my opinion, a fair comparison would involve applying all generative models to the same input, i.e., the multi-scale feature encodings. Different generative models come with their own advantages and drawbacks, and a fair comparison could better show why the diffusion model is the best choice.
- The speed comparison is incomplete. As mentioned, the other generative models are only applied at pixel level, so the time comparison is based on different inputs. Additionally, it would be interesting to see the speed gap between the proposed method and classification or distance-based methods.
- The feature-based approach relies on the feature encodings of the image encoder. From the comparison of EfficientNet-b4 and ResNet50 as encoders, it appears that OoD detection performance benefits from stronger encoders (although this is not explicitly validated). From this perspective, it is unclear what the underlying model is for the classification or distance methods. For a fair comparison, these approaches should also use EfficientNet-b4 and ResNet50 as backbones.
- There are no images to show some qualitative results. Particularly, examples of failure cases would be interesting to see.
Technical Quality: 2
Clarity: 3
Questions for Authors: - In line 142ff: “Such multi-layer features could better rebuild the samples and encourage the ID semantic features distributed more compactly within a certain space from different semantic layers.” To me it is unclear what is meant by semantic features being encouraged to be distributed more compactly. As far as I understand, the features of the encoders are only extracted and not modified during the training of the OoD detection model.
- Another insightful metric to measure OoD detection performance could be the AUPRC, in particular if there are significantly fewer OoD examples than ID examples. The AUROC numbers seem saturated for a couple of datasets.
- The OoD detection task with the given datasets seems rather simple. It would be interesting to see the proposed method in other settings where the original task is more challenging. Are there limitations in terms of scalability?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have discussed limitations. Potential negative social impact has not been mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1. Comparison against other methods using the multi-scale feature encodings as the input.
A: We have compared our method against AE and VAE using the multi-layer feature encodings as inputs.
For AE (AutoEncoder), we use the LFDN network without the timestep embedding, i.e., a 16-layer linear network. For VAE, we use a 5-layer linear network as the encoder and an 8-layer linear network as the decoder.
|ID|OOD|AE(+MFsim)|VAE(+MFsim)|Diffusion(+MFsim)|
|-|-|-|-|-|
|CIFAR10|SVHN|57.68|83.96|98.89|
||LSUN|81.47|97.69|99.83|
||MNIST|95.85|99.98|99.99|
||FMNIST|79.61|98.69|99.99|
||KMNIST|90.51|99.96|99.99|
||Omniglot|81.50|97.69|99.99|
||NotMNIST|81.61|99.88|99.99|
||average|81.18|96.84|**99.81**|
|Time|Num img/s (↑)|1224.2|1179.4|999.3|
Compared to AE and VAE, the diffusion model has significant advantages when modeling complex multidimensional distributions.
Q2. Speed comparison with classification or distance-based methods.
A: The speed comparison with classification-based and distance-based methods is presented below. All experiments were conducted on an NVIDIA GeForce RTX 4090 GPU.
|Method|MSP|EBO|DICE|ASH-S|SimCLR+Mahalanobis|SimCLR+KNN|ours(+MSE)|ours(+LR)|ours(+MFsim)|
|-|-|-|-|-|-|-|-|-|-|
|Type|Classifier-based|Classifier-based|Classifier-based|Classifier-based|Distance-based|Distance-based|Generative-based|Generative-based|Generative-based|
|img/s (↑)|1060.5|1060.5|1066.3|1047.6|674.8|919.8|960.6|360.2|960.6|
The inference speed of our method based on MSE or MFsim is faster than that of the distance-based methods SimCLR+Mahalanobis and SimCLR+KNN, because computing the covariance matrix or the K nearest neighbors takes additional time. Our method is also comparable to classifier-based methods, including MSP, EBO, DICE, and ASH-S.
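As a side note, img/s numbers of this kind can be reproduced with a simple timing loop of the following shape (illustrative only; `score_fn` stands for any detector's scoring function, and on a GPU one would additionally synchronize the device before reading the clock):

```python
import time

def images_per_second(score_fn, batch, n_warmup=2, n_runs=10):
    # warm up caches / lazy initialization before timing
    for _ in range(n_warmup):
        score_fn(batch)
    start = time.perf_counter()
    for _ in range(n_runs):
        score_fn(batch)
    elapsed = time.perf_counter() - start
    return n_runs * len(batch) / elapsed
```

Averaging over several runs after warm-up reduces the variance from one-off startup costs.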
Q3. Comparison using the same image encoder as backbone.
A: The experimental results based on classification and distance methods are taken from the best results in the original papers using their optimal backbones.
We also conducted experiments to compare our method with other methods using the same image encoder. The FPR95 and AUROC metrics for methods using EfficientNet-b4 as the backbone are as follows.
We provide the average FPR95 and AUROC values across six OOD datasets: SVHN, LSUN-c, LSUN-r, iSUN, Textures, and Places365.
|ID|Type|Method|**average FPR95**|**average AUROC**|
|-|-|-|-|-|
|CIFAR10|Classifier-based|MSP|34.70|94.74|
|||EBO|17.42|96.31|
|||DICE|11.55|97.44|
|||ASH-S|12.53|96.51|
||Distance-based|SimCLR+Mahalanobis|57.52|78.87|
|||SimCLR+KNN|51.53|92.67|
||Generative-based|ours(+MSE)|20.75±0.02|96.93±0.01|
|||ours(+LR)|17.20±0.02|97.61±0.02|
|||ours(+MFsim)|**2.51±0.01**|**99.34±0.01**|
|CIFAR100|Classifier-based|MSP|69.02|80.70|
|||EBO|84.36|79.31|
|||DICE|60.39|83.72|
|||ASH-S|51.71|83.13|
||Distance-based|SimCLR+Mahalanobis|89.75|62.71|
|||SimCLR+KNN|91.56|61.94|
||Generative-based|ours(+MSE)|52.95±0.02|86.35±0.01|
|||ours(+LR)|51.72±0.03|89.17±0.01|
|||ours(+MFsim)|**14.78±0.02**|**97.20±0.01**|
The average FPR95 and AUROC metrics of methods using ResNet-50 as the backbone are provided in Table II of the PDF file attached in the global rebuttal.
Q4. Qualitative results.
A: We have included three types of failure cases in the PDF.
The first type, shown in Figure B, represents ID samples misclassified as OOD. It can be observed that these misclassified samples often have significant shadows and lack semantic information, resulting in high reconstruction errors and being incorrectly classified as OOD samples.
The second type, shown in Figure C, represents OOD samples misclassified as ID. It can be observed that these OOD samples have categories very similar to those of the ID samples (CIFAR-10), such as cars and ships, which are categories present in CIFAR-10.
The third type, shown in Figure D, represents OOD samples with colors very similar to the ID samples, leading to their misclassification as ID.
Q5. Explanation to more compact semantic features.
A: Due to the higher dimensionality and complexity of the feature distribution at the pixel level, our proposed multi-scale feature fusion and compression strategy significantly reduces the feature dimensions and makes the distribution more compact. This allows the sample reconstruction process to focus more on the primary features with inter-class discriminative power, rather than secondary details, thereby achieving the goal of distinguishing between ID and OOD samples.
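Concretely, the per-sample OOD scores referred to throughout (MSE, and one plausible reading of MFsim as an aggregated cosine dissimilarity over layers — the exact definition is given in the paper, so treat this as a hedged sketch) could look like:

```python
import numpy as np

def mse_score(features, reconstructed):
    # features / reconstructed: lists of per-layer feature vectors;
    # a higher aggregate reconstruction error suggests an OOD sample
    return float(np.mean([np.mean((f - r) ** 2)
                          for f, r in zip(features, reconstructed)]))

def mfsim_score(features, reconstructed, eps=1e-8):
    # assumed reading of MFsim: 1 - mean cosine similarity across layers
    sims = [np.dot(f, r) / (np.linalg.norm(f) * np.linalg.norm(r) + eps)
            for f, r in zip(features, reconstructed)]
    return float(1.0 - np.mean(sims))
```

Both scores are thresholded downstream; only their ranking of samples matters for AUROC-style evaluation.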
Q6. AUPRC Measurement
A: We use the AUPRC metric to evaluate OOD detection methods based on the EfficientNet-b4 backbone model. We provide the average AUPRC values across six different datasets in Table I of the PDF. Our method achieves better performance than existing methods as well.
Q7. Experiments on more challenging datasets.
A: We provide experiments on more challenging datasets using EfficientNet-b4 as the encoder.
(1) We use CIFAR-10 as in-distribution data and CIFAR-100 as OOD data, which has relatively high similarity with CIFAR-10. The results are shown in Table III of the PDF.
(2) We also test OOD detection methods in experiments where ImageNet100 is regarded as the ID dataset and SUN, iNaturalist, Textures, and Places365 are used as OOD datasets. The results are shown in Table IV of the PDF.
Our method still shows significant advantages over the baselines, which demonstrates the scalability of our approach.
Q8. Potential negative social impact have not been mentioned.
A: We have discussed the potential negative social impact in the appendix of our paper: As with any advanced detection method, there is a risk that the technology could be misused. For instance, in surveillance applications, it could be employed to monitor individuals without their consent, leading to privacy violations and ethical concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I have raised my score accordingly. | Summary: The paper addresses the problem identification of out of distribution models in an unsupervised manner. The idea is that OOD samples have the largest reconstruction error. The method is evaluated on a plethora of setups including SVHN [Netzer et al., 2011], LSUN(and variants) iNaturalist, Textures, Places365.
Strengths: - The theme is interesting. Identification of OOD samples has many applications. A limiting factor is that the paper tests in general and synthetic OOD setups and does not identify a particular scenario with OOD samples. Yet, compared to other papers that focus on synthetic evaluation, the setup in this paper is realistic.
- The paper contains innovation, although it is somewhat limited. To the best of my knowledge, which is somewhat aligned with the prior work presented in the paper, the idea of using reconstruction error as a measure of how OOD a sample is, is indeed novel. A limiting factor lies, for instance, in the triplet loss, where the network is forced to do the opposite: adjust weights such that samples which initially were OOD are pushed towards the middle of the distribution. Yet turning a weak point of a training algorithm into a strong point for a different problem is, in my view, interesting and novel.
- Evaluation is strong: the comparison covers strong previous works and many setups. The evaluation is carried out on a benchmark derived from 10 datasets and compared against 7 strong solutions. In my view, it goes far beyond the minimum evaluation required to prove that a method works.
- The ablation is informative. I particularly appreciate the "before and after training" experiment.
Weaknesses: - At the core, the idea is that reconstruction error points to OOD. This is new, but belongs to a broader family (including AE and contrastive learning), and is thus somewhat limited.
- The limitations section points only to the strength of the encoder. However, the assumption that OOD samples lead to large reconstruction error is also a prior assumption (although a strong one given the evaluation), and one may identify cases where it is not true.
Technical Quality: 3
Clarity: 3
Questions for Authors: None. Questions I had (e.g. other models as encoder) have been answered in extended material.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: not the case
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1. At the core, the idea is that reconstruction error points to OOD. This is new, but is in a broader family (including AE, contrastive learning), thus somewhat limited.
A: Thanks for the insightful comment. We agree that the core idea of using reconstruction error to indicate OOD is indeed related to broader methodologies, including Autoencoders (AEs) and contrastive learning. However, we believe our approach introduces a novel perspective and significant innovations in this field. Our overall framework is innovative in that it applies diffusion models to multi-scale feature layers, establishing reconstruction error from this unique vantage point. This new approach attempts to leverage the strengths of diffusion models in capturing and reconstructing complex data distributions, setting it apart from traditional AE and contrastive learning methods. We appreciate the recognition of the novelty within the broader context and are confident that our contribution offers valuable advancements to the OOD detection domain.
Additionally, we compared the performance of AE and Diffusion for multi-scale feature modeling to observe their performance differences. For AE (AutoEncoder), we use the LFDN network without the timestep embedding, i.e., a 16-layer linear network. The results are as follows:
|ID|OOD|AE(+MFsim)|Diffusion(+MFsim)|
|-|-|-|-|
|CIFAR10|SVHN|57.68|98.89|
||LSUN|81.47|99.83|
||MNIST|95.85|99.99|
||FMNIST|79.61|99.99|
||KMNIST|90.51|99.99|
||Omniglot|81.50|99.99|
||NotMNIST|81.61|99.99|
||average|81.18|**99.81**|
|Time|Num img/s (↑)|1224.2|999.3|
This further demonstrates the advantage of using diffusion for feature modeling.
Q2. The limitations section points only to the strength of the encoder. However, the assumption that OOD samples lead to large reconstruction error is also a prior assumption (although a strong one given the evaluation), but one may identify cases where it is not true.
A: Thanks for the valuable feedback. We acknowledge the point regarding the prior assumption that OOD samples lead to large reconstruction errors. While this is generally a prior assumption supported by our evaluation, there are indeed scenarios where it may not hold true. Some difficult in-distribution (ID) samples may exhibit large reconstruction errors, and certain OOD samples similar to ID samples may have smaller reconstruction errors. The reconstruction errors for ID and OOD samples after training, as shown in Figure 3, also indicate this point. There is a small overlap between the reconstruction errors of some ID and OOD samples.
We also considered the detection performance of our method when the reconstruction errors are quite close. For example, the ID dataset is from CIFAR10, and the OOD dataset is CIFAR100 which shares similar distributions with CIFAR10. Below are our results:
CIFAR-10 as ID and CIFAR-100 as OOD:
|ID|OOD|Method|FPR95↓|AUROC↑|
|-|-|-|-|-|
|CIFAR10|CIFAR100|MSP|52.04|86.14|
|||EBO|51.32|86.19|
|||ASH-S|51.29|87.13|
|||GLOW|➖|73.60|
|||VAE|90.41|55.95|
|||DDPM|93.21|54.00|
|||ours(+MSE)|**48.87**|**87.54**|
|||ours(+LR)|49.48|87.24|
|||ours(+MFsim)|53.70|85.60|
CIFAR-10 as ID and noisy CIFAR-10 as OOD:
|ID|OOD|Method|FPR95↓|AUROC↑|
|-|-|-|-|-|
|CIFAR10|CIFAR10(add noise)|MSP|90.23|61.63|
|||EBO|88.01|62.49|
|||ASH-S|86.95|62.16|
|||VAE|89.20|51.76|
|||DDPM|95.80|45.00|
|||ours(+MSE)|7.84|98.43|
|||ours(+LR)|16.27|96.76|
|||ours(+MFsim)|**0.14**|**99.97**|
It can be observed that our method still achieves the best performance in these two specific scenarios. This validates the reasonableness of this prior assumption.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful and detailed answer! | Summary: This paper proposes a diffusion-based layer-wise semantic reconstruction strategy for unsupervised Out of Domain (OOD) detection. Specifically, they leverage the intrinsic data reconstruction ability of the diffusion model to differentiate between the In-distribution (ID) and OOD samples in the feature space. The features/data reconstruction at the multi-layer feature spaces helps the generative OOD detection. The experiments suggest the superiority of the proposed approach compared to existing approaches.
Strengths: - The paper proposes a novel scheme for unsupervised OOD detection based on semantic features reconstruction at multiple layers.
- Building the diffusion network on top of multi-level features instead of pixel-level outputs, helps in better preserving the ID information.
- State-of-the-art performance on benchmark datasets.
Weaknesses: 1- The paper needs a writeup revision, for example the contributions and idea of the paper is repeated multiple times unnecessarily.
2- The third step of the proposed approach, i.e., the OOD detection step, is not clear. The authors define three metrics for OOD detection based on some threshold; however, the paper never mentions those values nor provides any experimental study of how they were selected.
3- A comparison with recent generative OOD methods is not provided. Instead, only VAE results are compared, even though other approaches are available ([Graham et al., 2023], [Gao et al., 2023], and [Liu et al., 2023]).
4- A comparative analysis between pixel-level and feature-level denoising, which is the basis of the proposed approach, is not performed.
5- The authors need to clearly indicate how the proposed approach is better than existing pixel-level approaches, both theoretically and experimentally.
6- The proposed approach, and most of the methods cited in this paper, are evaluated on classification problems. However, the motivation is built on natural scenarios, in which the input images may be more complex and the task may be more difficult than classification alone. A study is required to evaluate the proposed approach on complex scenes and on object detection and semantic segmentation datasets, which contain more information than a single primary object.
Technical Quality: 3
Clarity: 2
Questions for Authors: The questions are there in the weaknesses section, specifically, experimental evidence of points 2-6 in weaknesses section will significantly improve the completeness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors mentioned the limitations related to the backbone architecture. However, large models pre-trained on large datasets are more generalised. So, how will this generalisation affect the model's performance?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1. Writeup revision.
A: Thank you for your valuable feedback. We appreciate your observation regarding the repetition of the contributions and the main idea of the paper. We will address these issues to ensure clarity and conciseness.
Q2. Clarification of OOD detection step.
A: During inference, samples with higher reconstruction errors, measured by MSE, MFsim, or LR, are more likely to be OOD samples. FPR95 indicates the false positive rate when the true positive rate reaches 95%. When calculating FPR95, the threshold is determined by the measurement value at which the true positive rate (TPR) reaches 95%. AUROC measures the area under the ROC curve. When calculating this metric, we vary the true positive rate from xxx to xxx in steps of xxx to choose the threshold. In practical usage, the threshold value can be selected with the OTSU algorithm. We report the F1-scores of different methods using OTSU to determine the threshold value as follows:
|ID|Method|SVHN|LSUN-c|LSUN-r|iSUN|Textures|Places365|**average**|
|-|-|-|-|-|-|-|-|-|
|CIFAR10|MSP|48.20|80.88|75.74|75.85|81.75|75.90|73.05|
||EBO|46.76|92.72|81.93|81.30|82.53|81.17|77.74|
||ASH-S|67.44|93.55|90.99|91.33|83.12|82.39|84.80|
||ours(+MSE)|79.21|91.04|83.16|82.93|97.43|97.43|88.53|
||ours(+LR)|86.18|91.85|85.74|85.03|97.43|97.43|90.61|
||ours(+MFsim)|91.91|97.42|95.20|94.71|97.43|97.43|**95.68**|
|CIFAR100|MSP|47.92|72.47|70.29|72.08|78.51|68.02|68.22|
||EBO|46.81|75.34|70.91|72.39|78.48|67.84|68.63|
||ASH-S|44.35|74.83|67.94|69.86|78.20|68.79|67.33|
||ours(+MSE)|49.80|73.75|67.56|70.11|97.43|97.43|76.01|
||ours(+LR)|52.39|74.05|68.75|71.13|97.43|97.42|76.86|
||ours(+MFsim)|65.48|96.52|87.39|87.20|97.43|97.43|**88.58**|
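Since OTSU threshold selection on a one-dimensional score distribution may be unfamiliar in this context, here is a minimal sketch (our own illustrative implementation of Otsu's method over a histogram of the OOD scores, not the paper's code):

```python
import numpy as np

def otsu_threshold(scores, n_bins=256):
    # pick the threshold that maximizes between-class variance
    # over a histogram of the 1-D scores
    hist, edges = np.histogram(scores, bins=n_bins)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0, sum0 = 0.0, 0.0
    for i in range(n_bins - 1):
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return float(best_t)
```

Scores above the returned threshold would then be flagged as OOD; the method works best when the ID and OOD score distributions are roughly bimodal.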
Q3. Comparisons with recent generative methods based OOD
A: The comparison between our method and DDPM [Graham et al., 2023] can be referred to Table 1 and 2 of our paper. Our method outperforms DDPM consistently on benchmarks using CIFAR10 or CelebA as ID data.
The comparison between our method and DiffGuard [Gao et al., 2023] is provided in the table below. Results of DiffGuard are taken from its original paper. Here, CIFAR10 is regarded as ID data, while CIFAR100 or TinyImageNet is regarded as OOD data. Our method based on MFsim achieves overall better performance than 'DiffGuard+Deep Ens', with 1.55 higher AUROC and 21.77 lower FPR95.
|Method|CIFAR-100 AUROC↑|CIFAR-100 FPR95↓|TINYIMAGENET AUROC↑|TINYIMAGENET FPR95↓|average AUROC↑|average FPR95↓|
|-|-|-|-|-|-|-|
|DiffGuard|89.88|52.67|91.88|45.48|90.88|49.08|
|DiffGuard+EBO|89.93|50.77|91.95|43.58|90.94|47.18|
|DiffGuard+Deep Ens|**90.40**|52.51|91.98|45.04|91.19|48.78|
|ours(+MSE)|87.54|**48.87**|97.68|13.42|92.61|31.15|
|ours(+LR)|87.24|49.48|97.11|15.04|92.18|32.26|
|ours(+MFsim)|85.6|53.7|**99.88**|**0.39**|**92.74**|**27.01**|
The comparison between our method and LMD [Liu et al., 2023] is shown in the following table. The evaluation metric is AUROC. The average AUROC of our method based on MFsim is 6.94 higher than that of LMD.
|ID|OOD|LMD|ours(+MSE)|ours(+LR)|ours(+MFsim)|
|-|-|-|-|-|-|
|CIFAR10|CIFAR100|60.70|87.54|87.24|85.6|
||SVHN|99.20|97.31|98.22|98.89|
|CIFAR100|CIFAR10|56.80|70.52|72.86|64.58|
||SVHN|98.50|83.93|88.84|93.9|
||**AVERAGE**|78.80|84.83|**86.79**|85.74|
Q4. Comparisons with pixel-level denoising approaches.
A: The quantitative comparisons between our method and pixel-level approaches are as follows:
We provide the distribution differences of the MSE scores at the two levels after training, with CIFAR-10 as the ID dataset and other datasets as OOD. The results are shown in Figure A of the PDF attached to the global rebuttal.
It can be observed that at the pixel level, the reconstruction error distributions of ID and OOD samples are very similar; the mixed MSE scores make it very hard to distinguish ID samples from OOD samples. At the feature level, however, the reconstruction score distribution of ID samples is clearly distinct from that of OOD samples. The reason is that our feature-level diffusion-based generative model makes the projected in-distribution latent space not only sufficiently compressed to capture the exclusive characteristics of ID images, but also powerful enough to reconstruct large-scale ID images of various categories.
Q5. Focusing on Classification Problems
A: Thanks for the valuable comment. Our experiments, along with most of the cited works, focus on classification problems. This focus is largely because the field of OOD detection has traditionally concentrated on classification tasks. However, we believe our method has the potential to extend beyond classification. Our approach can serve as an effective data pre-processing step for more complex tasks like semantic segmentation and object detection. By identifying and filtering out OOD inputs, our method can enhance data security and improve the performance of these models. Additionally, our method can be integrated into semantic segmentation and object detection pipelines to judge whether detected or segmented object proposals are out of distribution.
Q6. Effect of generalisation on model performance.
A: A generalised backbone model is crucial for ensuring that the extracted features are comprehensive and representative of the samples. When modeling in multi-scale feature spaces, it is essential that the extracted features are thorough and meaningful. Small-scale models may have shallower layers, which might not fully capture the complexity of a sample. The generalization ability of large models allows them to extract a more complete and representative set of features, which is essential for the effective performance of our OOD detection method. This ensures that the model can adequately generalize across various inputs, leading to a more robust and reliable detection system.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response.
---
Rebuttal Comment 1.2:
Title: Queries addressed
Comment: I have revised my rating accordingly. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback, with three reviewers (yKR8, ch15 and aPqn) supporting our work. We are encouraged that reviews think our paper:
- **A novel and interesting unsupervised OOD detection scheme.** (by Reviewer yKR8, ch15, aPqn)
- **State-of-the-art performances on benchmark datasets.** (by Reviewer yKR8, ch15, aPqn)
- **Strong evaluation and informative ablation.** (by Reviewer ch15)
- **Good presentation.** (by Reviewer aPqn, Jgzd)
Concerns of reviewers are addressed in the rebuttal to each reviewer with extra tables and figures in the attached pdf document.
1.In response to Reviewer aPqn’s question on evaluation metric, we provide AUPRC to measure OOD detection performance in Table I.
2.In response to Reviewer aPqn’s concern on using same backbone, we provide the average FPR95 and AUROC values for different methods using ResNet-50 as the backbone in Table II.
3.In response to Reviewer aPqn’s concern on scalability to more challenging settings, we conduct experiments using CIFAR-10 as the ID dataset and CIFAR-100 as the OOD dataset. The results are presented in Table III. We also conducted experiments using ImageNet100 as ID dataset. The results are presented in Table IV.
4.In response to Reviewer yKR8’s question on comparative analysis between the pixel-level and feature-level denoising, we visualize the reconstruction error distribution for both pixel-level and feature-level reconstructions in Figure A. It can be observed that at the pixel level, the reconstruction error distributions of ID and OOD samples are very similar. However, at the feature level, the reconstruction score distribution of ID samples shows a clear distinction from that of OOD samples.
5.In response to Reviewer aPqn’s request on qualitative results of failure cases, we provide three groups of samples representing three main types of failure cases.
We have tried our best to answer all the reviewers’ questions about our paper. We sincerely hope that our responses can address all the concerns.
Pdf: /pdf/18c3cc022cd59b1688008bb3ca349aafc30399bd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning to Decouple the Lights for 3D Face Texture Modeling | Accept (poster) | Summary: The paper presents a new face texture modeling framework by learning to decouple environmental illumination into multiple light conditions with neural representations. The proposed method can reconstruct a cleaner texture compared to previous work that use a single environment map. The proposed method is neat and reasonable. The paper is well-written.
Strengths: * The first method to tackle the challenging but important problem of face reconstruction which is neglected by most existing works.
* The method is novel and neat. After reading the introduction and going through Fig.1, I believe the proposed method can well solve the problem.
* The paper is well-written and easy to follow.
Weaknesses: * Line 119 says the n lighting coefficients are initialized differently. Can you provide some insights behind these design choices? Is the method sensitive to the initialization of lighting?
* In the proposed Human Prior Constraint, the rendered face is encouraged to be similar to a specific face in the FaceNet. Why not use the common perceptual loss as D3DFR?
* The texture of D3DFR in Figure 3 is missing. I know it reconstructs the BFM texture, which is not a diffuse albedo map. But I think it should also be provided. In addition, the FFHQ-UV's texture is presented in UV space while NextFace and the proposed methods' texture are presented in image space, I suggest presenting them all in UV space or all in image space.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Line 119 says the n lighting coefficients are initialized differently. Can you provide some insights behind these design choices? Is the method sensitive to the initialization of lighting?**
**A1:** Sure. The lighting coefficients are simply initialized with a uniform distribution between -1 and 1, as illustrated in Line 119, Sec. 3.3 of the paper. This operation provides $n$ initialized local illuminations ranging from dark to light. During the later optimization of Stage 2, the $n$ groups of lighting coefficients are optimized towards the actual illumination. Coefficients that deviate significantly from the existing illumination in the images will have masks with smaller areas, as shown in the visualized $M_N$ in Fig. 2 of the main paper. These coefficients with smaller masks are deemed less important and are removed according to a predefined threshold $\epsilon$ in ACE.
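As a minimal illustration of this initialization (the coefficient dimensionality, e.g. 27 spherical-harmonics values, is our assumption — the rebuttal only specifies the uniform range and the number of candidates $n$):

```python
import numpy as np

def init_lighting_coefficients(n=5, dim=27, seed=0):
    # n candidate lighting conditions, each a coefficient vector drawn
    # uniformly from [-1, 1]; dim=27 (9 SH bands x RGB) is an assumption
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=(n, dim))
```

Each of the $n$ rows is then optimized independently in Stage 2, and rows whose associated masks shrink below the threshold $\epsilon$ are pruned.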
Here, we also provide an ablation study on the number $n$ used to initialize the lighting conditions. Single images from VoxCeleb2 are used for evaluation.
| n | 3 | 5 (we use) | 7 | 9 |
|--|--|--|--|--|
|PSNR $\uparrow$ | 28.49 |29.22 |29.14 | 29.16|
|SSIM $\uparrow$ | 0.90 | 0.91 |0.91 | 0.91|
|LPIPS $\downarrow$ | 6.37 | 6.36 | 6.37 | 6.39 |
We observe that $n=3$ produces sub-optimal results, likely because the number of lighting condition candidates is insufficient to model images with complex illuminations. In contrast, $n=5, 7, 9$ yield good and similar results, as there are enough initial lighting conditions and any redundant ones are removed. The result for $n=5$ is slightly better. While a larger $n$ with further hyper-parameter tuning might improve performance, it would also increase the optimization burden. Therefore, we use $n=5$ in this work.
**Q2: In the proposed Human Prior Constraint, the rendered face is encouraged to be similar to a specific face in the FaceNet. Why not use the common perceptual loss as D3DFR?**
**A2:** Please note that the rendered faces used to calculate the Human Prior Constraint (HP) are $I_{Rs}$. As illustrated in Fig. 2 of the main paper, $I_{Rs}$ includes rendered faces under multiple predicted lighting conditions. We cannot apply perceptual loss to $I_{Rs}$ because we do not have corresponding ground truth images under the multiple decoupled lighting conditions. However, HP does not require such ground truths. Perceptual loss can indeed be applied between the final rendered result $I_{out}$ and input image $I_{in}$.
Below, we present a comparison between the perceptual loss implementation and our proposed HP. We can see that HP still performs better, confirming that it provides more effective constraints on the textures through faces rendered under multiple lighting conditions.
| | Perceptual Loss | HP(ours) |
|--|--|--|
|PSNR $\uparrow$ | 28.72 |**29.22** |
|SSIM $\uparrow$ | 0.90 | **0.91** |
|LPIPS $\downarrow$ | 6.46 | **6.36** |
**Q3: The texture of D3DFR in Figure 3 is missing. In addition, the FFHQ-UV's texture is presented in UV space while NextFace and the proposed methods' texture are presented in image space, I suggest presenting them all in UV space or all in image space.**
**A3:** Thank you for your suggestion. We agree that the texture presentation space needs to be consistent. To achieve this, we transform all UV-textures to the image space. This allows textures from D3DFR and CPEM to be visualized as colored faces, even if they do not have UV-textures. Fig. 16 and 17 in the rebuttal PDF are provided in this space. We will also update the texture images in the main paper accordingly in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal, it addressed my main concerns. I will keep my score. Do you plan to release the fitting code? I think it is a great contribution to the community.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response! We are encouraged that our rebuttal can address your concerns. We will release our source code after we clean it up. | Summary: This paper proposes a method to recover face albedo by disentangling the input image into an albedo map and shading maps (called "light conditions"). The shading maps are generated by a network and combined with the albedo texture (generated from AlbedoMM) rendered under a set of $n$ lighting conditions represented using spherical harmonics. The networks for predicting the shading maps, the spherical harmonics, and the texture are optimized through a photometric loss along with regularizations. To generate meaningful masks, a binarization constraint and an area constraint are added on the generated masks. While the final renders are significantly better than prior work, two crucial comparisons are missing, i.e., against the Haar-wavelet basis (https://graphics.stanford.edu/papers/allfreqmat/) and the albedo from AlbedoMM
Strengths: 1) From my perspective, the central contribution of this paper is a somewhat 'neural' spherical harmonics (SH) representation, where the 'neural' masks help model sharp illumination effects such as shadows which SH fails to capture. This is certainly an interesting direction to explore
2) The paper is well written
3) Quantitative and qualitative results are better than prior work (especially in albedo). However, some crucial comparisons are missing.
Weaknesses: 1) The authors have not compared against optimizing in haar-wavelets (https://graphics.stanford.edu/papers/allfreqmat/), which are designed specifically to model sharp illumination-dependent effects. Without this comparison, it is hard to assess the improvements the proposed model offers over classical representations. I understand that such an optimization may be compute intensive, but it is necessary.
2) There are no results shown of the initial AlbedoMM texture from which the texture map is initialized. AlbedoMM already gives a relatively uniform texture map that is free from lighting artifacts, so it is unclear what additional benefits texture optimization yields.
Technical Quality: 2
Clarity: 3
Questions for Authors: How many bands of SH are optimized? Did you investigate optimizing a higher number of bands instead of generating the masks (again, as done in http://graphics.stanford.edu/papers/allfreq/allfreq.pdf)?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The authors have not compared against optimizing in haar-wavelets[1] which are designed specifically to model sharp illumination dependent effects.**
[1] Triple Product Wavelet Integrals for All-Frequency Relighting
**A1:** Thank you for your suggestion. Please note that our contribution lies not in proposing a universal neural illumination model for all scenes, but in decoupling lights to aid in recovering more accurate textures in 3D face reconstruction.
Reference [1] offers a solution for modeling environment illumination using haar wavelets for relighting. However, [1] focuses on efficient rendering, with all geometrical and texture information pre-processed and fixed. This differs significantly from the 3D face reconstruction setting, where geometrical, texture, and illumination parameters are co-optimized, making illumination optimization more challenging.
To our knowledge, haar-wavelet modeling of illumination has not yet been applied to 3D face reconstruction, and no open-source code is available. To validate this idea, we create a baseline by replacing the Spherical Harmonics (SH) environment map modeling in NextFace and NextFace* with haar wavelets.
The optimization of wavelets is considerably slower than SH, making it difficult to conduct quantitative comparisons on the full datasets. Therefore, we conducted a comparison on a subset of 5 pairs of single images from VoxCeleb2. The quantitative comparisons are presented below:
| | NextFace | NextFace(haar) | NextFace* | NextFace*(haar) | Ours |
|--|--|--|--|--|--|
| PSNR $\uparrow$ | 21.82 | 21.84 | 22.12 | 22.12 | **26.66** |
| SSIM $\uparrow$ | 0.83 | 0.84 | 0.84 | 0.84 | **0.90** |
| LPIPS $\downarrow$ | 11.12 | 11.37 | 10.51 | 11.30 | **6.29** |
We observe that modeling global illumination with haar wavelets yields limited improvement over the original SH in PSNR for our face reconstruction task. Although performance might be enhanced by modifying the wavelets or increasing the size of the environment map, such modifications are beyond the scope of this paper and would incur significant time costs. Optimizing such a 64 × 64 environment map from wavelets takes approximately 28 minutes for a 256 × 256 image, whereas our method takes about 340 seconds. Therefore, our method remains more efficient and effective at this time.
**Q2: There are no results shown of the initial AlbedoMM texture the texture map is initialized from. AlbedoMM already gives a relatively uniform texture map that is free from lighting artifacts, it is unclear what additional benefits texture optimization yields.**
**A2:** As demonstrated in Fig. 2 and Alg. 1 of the main paper, we use AlbedoMM to initialize the textures in Stage 2, while the textures are directly optimized in Stage 3. Therefore, the textures from Stage 2 are precisely the AlbedoMM textures.
In Section A.4 of the appendix, on page 12, we have provided a comparison between the AlbedoMM textures (Stage 2) and the final textures (Stage 3). As shown in Fig. 10 of the appendix, the texture optimization adds more details, such as beards, to the AlbedoMM textures, resulting in a more realistic reconstruction.
**Q3: How many bands of SH are optimized? Did you investigate optimizing higher number of bands instead of generating the masks again?**
**A3:** In this work, we follow NextFace [1] by using 9-band spherical harmonics (SH) to model local illumination. Additionally, we provide quantitative results of our method modeled with a single global SH using 9, 12, 15, and 18 bands, by removing the $f(\cdot)$ and $g(\cdot)$.
| B | 9 | 12 | 15 | 18 | Ours |
|--|--|--|--|--|--|
| PSNR $\uparrow$ | 25.19 | 25.26 | 25.27 | 25.34 | **29.22** |
| SSIM $\uparrow$ | 0.87 | 0.87 | 0.87 | 0.87 | **0.91** |
| LPIPS $\downarrow$ | 9.16 | 9.23 | 9.22 | 9.10 | **6.36** |
We observe that increasing the number of bands in a single global SH yields quite limited improvements. A possible reason is that the external occluded shadows on human faces represent drastic illumination changes in relatively small areas, which may not be appropriately modeled as a single global SH during optimization. | Summary: This work tackles the problem of external shadows in single image face reconstruction. Specifically, when the input image contains foreign shadows, this often affects the quality of the estimated facial texture, as the external shadows often become baked into the texture or leave behind undesirable artifacts in the shadow region. The paper proposes a comprehensive solution to this problem, including a way to decouple the face image into the result of multiple lights as well as an Adaptive Condition Estimation (ACE) strategy to select which lights are present in the image. The paper further proposes multiple human face priors, such as a global prior to ensure the face texture hue is consistent with the initialization from a 3DMM model and a local prior to ensure that the smoothness of the face texture is similar to the initialization. Experiments demonstrate that the method is able to improve rendering performance on in-the-wild face images by rendering both source images with external shadows and target images without external shadows. The major improvements on target images without external shadows shows that the method is able to produce accurate facial textures that are minimally impacted by the presence of these shadows. Qualitative ablations are also provided to further aid in evaluating the method.
Strengths: The improvement in rendering performance on target images (w/o external shadows) clearly shows that the method is able to minimize the impact of external shadows from the source image. The qualitative results further support this conclusion, as the facial textures do not contain external shadow artifacts.
Qualitative ablations are available to help assess the impact of each technical contribution in this work.
This work is sufficiently novel since it enables more accurate face reconstruction under the condition of external occlusions from the source image, which is a heavily understudied problem. It also goes beyond traditional work in this area since it considers the scenario that there could be more than one light in the image and proposes a method to estimate the set of lights illuminating the face.
Weaknesses: For the ablation studies, it would be much more convincing to have tables with quantitative results demonstrating the margin of improvement of each component. Simply picking a few favorable qualitative examples is easy and not convincing.
In almost all examples in this work, the foreign (external) shadows involved are caused by hats. What about other types of foreign shadows caused by tree leaves, pens, paper, hands, etc? How are the resulting face textures in these situations? It would be nice to compare with the baselines on some images with more diverse foreign shadows. If it is difficult to find such images in the wild, you can find a small test set of 100 or so images from the paper "Portrait Shadow Manipulation (SIGGRAPH 2020)" with diverse foreign shadow shapes.
Some important citations are missing from this work, especially in the face relighting and portrait shadow removal domains. There are several face relighting methods that involve intrinsic decomposition of faces and illumination modeling, some of which involve ray tracing to handle self shadows:
1. SfSNet : Learning Shape, Reflectance and Illuminance of Faces in the Wild (CVPR 2018)
2. Face Relighting with Geometrically Consistent Shadows (CVPR 2022)
3. Learning Physics-guided Face Relighting under Directional Light (CVPR 2020)
In addition, the shadow removal domain is highly relevant to this work since a simple solution to this problem would be to perform a preprocessing step to remove the foreign (external) shadows from the image first using a portrait shadow removal method before performing face reconstruction. These methods should be cited and discussed in the paper and the authors should verify that their method outperforms this simple baseline:
1. Portrait Shadow Manipulation (SIGGRAPH 2020)
2. Unsupervised Portrait Shadow Removal via Generative Priors (MM 2021)
3. Blind Removal of Facial Foreign Shadows (BMVC 2022)
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the experiments I suggested above, especially with regard to quantitative evaluations for ablations, evaluations on images with more diverse foreign (external) shadows, and comparing with the simple baseline of running a foreign shadow removal method on the images beforehand. I will be carefully reviewing the rebuttal as well as the opinions of the other reviewers to decide if I would like to change my rating. Please also factor in missing citations, as the two areas I mentioned are very relevant to this work.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have at least made an attempt to include a limitations section, although more analysis on failure cases would be helpful. For example, does the method fail when the external shadow covers most of the face? There is no section on potential negative societal impact, although there are certainly some potential concerns with face reconstruction methods, as with any face modeling work. The potential to edit the face and produce fake content is always present.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Quantitative evaluations for ablations.**
**A1:** Thank you for your suggestion. Here, we provide the quantitative ablation study of our proposed components below, following the same setting as Sec. 4.4 of the main paper. GP, LP, and HP denote our proposed constraints $L_{GP}$, $L_{LP}$, and $L_{HP}$ from Sec. 3.4, while Light and Occlusion denote $f(\cdot)$, which combines multiple lighting conditions, and $g(\cdot)$, which removes the direct external occlusion, as described in Sec. 3.3.
Here is the ablation study for constraints:
| | NA | GP | GP+LP | GP+LP+HP (ours) |
|--|--|--|--|--|
| PSNR $\uparrow$ | 27.20 | 28.81 | 28.82 | 29.22 |
| SSIM $\uparrow$ | 0.88 | 0.89 | 0.90 | 0.91 |
| LPIPS $\downarrow$ | 8.34 | 6.78 | 6.49 | 6.36 |
Here is the ablation study for the proposed $f(\cdot)$ and $g(\cdot)$.
| | NA | + Light ($f(\cdot)$) | + Occlusion ($g(\cdot)$) |
|--|--|--|--|
| PSNR $\uparrow$ | 25.19 | 27.52 | 29.22 |
| SSIM $\uparrow$ | 0.87 | 0.89 | 0.91 |
| LPIPS $\downarrow$ | 9.16 | 7.75 | 6.36 |
We can see that each component contributes to the final performance.
**Q2: Missing citations and Discussions about relighting works [1, 2, 3].**
[1]SfSNet : Learning Shape, Reflectance and Illuminance of Faces in the Wild (CVPR 2018)
[2]Face Relighting with Geometrically Consistent Shadows (CVPR 2022)
[3]Learning Physics-guided Face Relighting under Directional Light (CVPR 2020)
**A2:** [1] proposes a framework for decomposing face attributes from in-the-wild images through physics-based rendering, while [3] predicts residuals to enhance diffuse relighting performance. [2] introduces ray-tracing to model geometrically consistent self-occlusion shadows for relighting. However, these methods primarily focus on self-occlusion shadows rather than external shadows. Due to character limitations, we will discuss these approaches in more detail in the revised version.
**Q3: Evaluations on images with more diverse foreign (external) shadows.**
[4] Portrait Shadow Manipulation (SIGGRAPH 2020)
**A3:** Thank you for your suggestion. We introduce 15 pairs of data (30 images) from [4] for evaluation under more diverse external shadows. The quantitative results are presented below, and some qualitative results are shown in Fig. 16 of the rebuttal PDF. We can see that our method still outperforms the other methods on faces with diverse shadows.
| | CPEM | D3DFR | NextFace | NextFace* | FFHQ-UV | Ours |
|--|--|--|--|--|--|--|
| PSNR $\uparrow$ | 23.34 | 25.02 | 21.83 | 21.93 | 24.00 | **28.97** |
| SSIM $\uparrow$ | 0.84 | 0.87 | 0.84 | 0.84 | 0.88 | **0.92** |
| LPIPS $\downarrow$ | 9.31 | 9.11 | 9.48 | 9.69 | 7.82 | **7.00** |
**Q4: Comparing with the simple baseline of running a foreign shadow removal method such as [4, 5, 6] on the images beforehand.**
[5] Unsupervised Portrait Shadow Removal via Generative Priors (MM 2021)
[6] Blind Removal of Facial Foreign Shadows (BMVC 2022)
**A4:** This is an interesting idea. To validate its effectiveness, we use the shadow removal method [5] to pre-process the face images with shadows, as it is the most recent shadow-removal work with a publicly available pre-trained model.
Here are the quantitative results evaluated on data from [4].
| | CPEM | D3DFR | NextFace | NextFace* | FFHQ-UV | Ours |
|--|--|--|--|--|--|--|
| PSNR $\uparrow$ | 25.56 | 25.60 | 23.49 | 25.47 | 26.19 | **28.97** |
| SSIM $\uparrow$ | 0.86 | 0.88 | 0.86 | 0.87 | 0.90 | **0.92** |
| LPIPS $\downarrow$ | 8.56 | 8.68 | 9.09 | 6.41 | 7.02 | **7.00** |
Here are the quantitative results evaluated on single images from VoxCeleb2.
| | CPEM | D3DFR | NextFace | NextFace* | FFHQ-UV | Ours |
|--|--|--|--|--|--|--|
| PSNR $\uparrow$ | 24.84 | 26.78 | 23.77 | 24.51 | 25.35 | **29.22** |
| SSIM $\uparrow$ | 0.87 | 0.90 | 0.85 | 0.86 | 0.91 | **0.91** |
| LPIPS $\downarrow$ | 10.20 | 7.93 | 10.47 | 9.64 | 7.62 | **6.36** |
We also provide qualitative comparisons in Fig. 17 of the rebuttal PDF. Pre-processing with the shadow removal method indeed improves the baselines' performance, but our method still outperforms them.
From the qualitative results, we observe that the shadow removal model cannot fully remove the external shadows in cases where they cover relatively large regions. Although making the shadow removal model more powerful may further improve the final performance, that is beyond the scope of this work; we leave it for future exploration.
**Q5: More analysis on failure cases. For example, does the method fail when the external shadow covers most of the face?**
**A5:** As shown in Fig. 16 of the rebuttal PDF, our method performs well even under external shadows covering most of the face. We present visualizations of our inferior cases in Fig. 19 of the rebuttal PDF. As mentioned in the Limitations section of the main paper, our primary limitation is the initialization with AlbedoMM. For faces with many high-frequency details, such as wrinkles, our method may lose these details during reconstruction. Replacing AlbedoMM with more powerful face representations could address this issue. We plan to explore this further in future work.
**Q6: There is no section on potential negative societal impact.**
**A6:** We apologize for the missing discussion. Similar to existing human face reconstruction works, our method, which focuses on recovering more realistic face textures under shadows from self and external occlusions, may raise ethical concerns about privacy, consent, and the spread of misinformation. We will discuss this carefully in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal! Most of my concerns are addressed here. However, for evaluating in Q3, can we use all 100 images in that dataset? This would be more comprehensive and convincing to me than only evaluating on 15 of them.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer SyuP
Comment: Thank you for your feedback. In the previous rebuttal, we presented results from 15 representative pairs due to time and hardware constraints. We have now extended our evaluation in Q3 to all 100 data pairs below.
| | CPEM | D3DFR | NextFace | NextFace* | FFHQ-UV | Ours |
|--|--|--|--|--|--|--|
| PSNR $\uparrow$ | 23.31 | 25.45 | 22.16 | 22.24 | 23.89 | **28.78** |
| SSIM $\uparrow$ | 0.83 | 0.86 | 0.84 | 0.84 | 0.88 | **0.91** |
| LPIPS $\downarrow$ | 9.93 | 9.18 | 9.58 | 9.64 | 7.96 | **7.33** |
We can see that our method consistently outperforms the others, further validating its effectiveness. | Summary: The paper presents a method for reconstructing 3D face textures from monocular images in the presence of occlusions, both self-occlusions and occlusions by other scene elements such as hats. The paper identifies a key limitation in existing works -- they all get impacted by the lighting changes introduced by an occluder, as they assume globally consistent lighting. The paper proposes to model the scene using a combination of different lighting for the different regions of the face. The different regions are reconstructed without direct supervision. Additionally, a mask is used to separate out the occluder pixels. Modeling multiple light sources enables the method to model appearance effects due to occluders and thus enables higher quality reconstructions than the state of the art.
Strengths: The paper is very well motivated. It identifies a key problem and motivates the solution very well.
The method is novel and simple.
The results are impressive and outperform the chosen baselines. The reconstructed masks fit very well to the shadow boundaries
Weaknesses: The main weakness is the lack of discussion and comparisons with existing works, and the lack of stronger baselines.
The paper does not mention existing research on dealing with occlusions. Here are a couple:
- Robust Model-based Face Reconstruction through Weakly-Supervised Outlier Segmentation [CVPR23]
- Occlusion-aware 3D Morphable Models and an Illumination Prior for Face Image Analysis [IJCV18]
Both these papers jointly perform 3D reconstruction and solve for occluders. They might not use texture optimization and instead perform per-vertex color optimization, but their techniques could be trivially extended to texture optimization. The lack of comparisons to these baselines makes it impossible to properly evaluate this submission.
In addition, there are other strong baselines that were not considered. Generative models, such as those introduced by "Face De-occlusion using 3D Morphable Model and Generative Adversarial Network" [ICCV19] can remove occlusions from portrait images. Can the output of those methods directly be used with NextFace for high-quality reconstructions?
The other weakness relates to the exposition. The technical details of the paper were not easy to read. I did not understand how many masks are used, what 't' refers to in L128, or how many such 't' are used. How are all the hyperparameters, such as 'n', 'n_L', 'm', 'epsilon', etc., chosen? I did not find a discussion. The paper is not reproducible in its current state.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Lack of discussion and comparisons with 3D face reconstruction works dealing with de-occlusion, such as [1, 2].**
[1] Robust Model-based Face Reconstruction through Weakly-Supervised Outlier Segmentation (CVPR'23)
[2] Occlusion-aware 3D Morphable Models and an Illumination Prior for Face Image Analysis (IJCV'18)
**A1:** Thank you for the reminder. References [1] and [2] propose different methods that predict masks to remove external occlusions, then optimize a 3D Morphable Model (3DMM) on the remaining regions to reconstruct de-occluded faces. We will discuss these methods in more detail in the revised version.
However, de-occlusion focuses on removing external occlusions, while **the shadows are not just occlusions**. A shadow region is actually the face under another illumination, which may include useful texture information. Simply treating shadowed regions as occlusions and removing them using de-occlusion methods may lead to the loss of face texture information in these areas. Our method, which integrates multiple lighting conditions, is more suitable for recovering textures from faces with shadows.
Here, we present a comparison with the recent open-sourced de-occlusion method [1] by extracting its 3D textures and evaluating them using the texture quality metrics as mentioned in Section 4.1 of the main paper. The quantitative results, evaluated on single images from VoxCeleb2, are presented below.
| | Deocclu[1] | Ours |
|--|--|--|
|PSNR $\uparrow$ | 25.13 |**29.22** |
|SSIM $\uparrow$ | 0.88 | **0.91** |
|LPIPS $\downarrow$ | 14.09 | **6.36** |
We also conduct comparisons on video sequences from VoxCeleb2:
| | Deocclu[1] | Ours |
|--|--|--|
|PSNR $\uparrow$ | 25.03 |**29.15** |
|SSIM $\uparrow$ | 0.88 | **0.91** |
|LPIPS $\downarrow$ | 14.18 | **6.92** |
Our method demonstrates superior performance. We also provide qualitative comparisons in Fig. 18 of the rebuttal PDF, where our method produces more realistic results; for example, results from [1] miss details such as beards, while our method preserves them.
**Q2: Can we use the de-occluded results of a generative de-occlusion model such as [3] with NextFace for high-quality reconstruction?**
[3] Face De-occlusion using 3D Morphable Model and Generative Adversarial Network (ICCV'19)
**A2:** Reference [3] trains a generative model to produce de-occluded face images from input and initial 3DMM synthesis images. We will discuss this in more detail in the revised version.
However, [3] does not provide its source code, making a direct comparison infeasible at this time. Additionally, as stated in Sec. 5.1 of [3], the model is trained on synthetic data constructed by layering common non-face objects onto faces. This approach focuses on learning how to remove these objects, without any consideration of shadows.
As we mainly address **the influence of shadows from self and external occlusions** in this work, a closer idea is to use the 2D shadow removal methods mentioned in **Q3-Reviewer SyuP**. Such works introduce generative models to recover high-quality images free from shadows.
We conducted experiments using the output of the shadow removal method [4] as input to CPEM, D3DFR, NextFace, NextFace*, and FFHQ-UV to evaluate their ability to acquire accurate textures without shadows. The quantitative comparisons are presented in **Q3-Reviewer SyuP**, and we also provide qualitative comparisons in Fig. 17 of the rebuttal PDF.
Our results show that introducing the output of such generative models to the texture modeling baselines still yields inferior results compared to our method.
[4] Unsupervised Portrait Shadow Removal via Generative Priors (MM 2021)
**Q3: The technical details of the paper were not easy to read. I did not understand how many masks are used, what 't' refers to in L128, or how many such 't' are used. How are all the hyper-parameters, such as 'n', 'n_L', 'm', 'epsilon', etc., chosen?**
**A3:** We apologize for the confusing parts. The number of masks actually changes according to the illumination complexity of the image. As illustrated in Lines 117~124 of the paper, we initialize $n$ separate lighting conditions, and $f(\cdot)$ predicts one mask for each lighting condition. These $n$ masks make up $M_N$ in Fig. 2. Then, the parameters of the $n$ lighting conditions and $f(\cdot)$ are optimized to reconstruct the face image, where lighting conditions not present in the face image receive smaller mask areas, as illustrated in Fig. 2 of the paper.
Then, in ACE, we drop the lighting conditions and masks with small mask areas, as explained in Lines 142~147 of the paper. In this way, $n_L$ masks from $M_N$ are retained, which make up $M_L$; $n_L$ is thus an adaptive number no greater than $n$.
We select $n=5$ in this work, meaning there are 5 masks in $M_N$ at the beginning, while $n_L \le n$ is determined by the subsequent optimization. Corresponding discussions can be found in **Q1-Reviewer NbDK**.
$m$, mentioned in Lines 142 to 157, is not a hyper-parameter; it represents the mask value between 0 and 1 at a pixel position, e.g., $m=0$ in the black regions of the masks.
$\epsilon$ is a threshold to filter out masks with very small areas. We set it to 0.17 in this work.
$t$ is determined by the number of frames in the input image/video sequence. For single-image reconstruction, $t$ is set to a fixed value of 0, while it is $i/k$ for the $i$-th frame of a $k$-frame video sequence. In this work, we mainly conduct comparisons on 8-frame sequences, as stated in Line 203, page 7 of the paper.
We will add more details about the mentioned contents in the revised version.
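As a rough illustration of how the retained per-light renders and masks could be combined: below is our reading of the pipeline as a minimal sketch. The function name `compose`, the normalization step, and all shapes are our assumptions, not the authors' code.

```python
import numpy as np

def compose(renders, masks, occlusion_mask):
    """Blend faces rendered under each retained lighting condition using
    the soft region masks, then zero out directly occluded pixels.

    renders:        (n_L, H, W, 3) renders, one per retained light
    masks:          (n_L, H, W, 1) soft region masks from f(.)
    occlusion_mask: (H, W, 1) external-occlusion mask from g(.), 1 = occluded
    """
    # Normalize so each pixel's mask contributions sum to one.
    weights = masks / np.clip(masks.sum(axis=0, keepdims=True), 1e-6, None)
    lit = (weights * renders).sum(axis=0)   # per-pixel blend of lit renders
    return lit * (1.0 - occlusion_mask)     # remove occluder pixels

# Toy example: two lighting conditions contributing equally everywhere.
renders = np.stack([np.full((8, 8, 3), 0.2), np.full((8, 8, 3), 0.8)])
masks = np.full((2, 8, 8, 1), 0.5)
occlusion = np.zeros((8, 8, 1))
out = compose(renders, masks, occlusion)
print(out.shape)  # (8, 8, 3)
```

In this toy case the output is the even blend of the two renders; in the real method the masks would be spatially varying, so shadowed regions draw from a darker lighting condition.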
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, and for clarifying that some papers I mentioned were focussed only on occlusions rather than shadows. However, Robust Model-based Face Reconstruction through Weakly-Supervised Outlier Segmentation [CVPR23] deals with outliers generally, including shadows? The qualitative results in the pdf seem to show that their results are indeed robust to shadows. Their results do not capture details such as beards very well, but that is an orthogonal issue to robustness to shadows? I suspect these details can be captured by lowering the weight of the regularizer in Eq.8 of their paper (https://arxiv.org/pdf/2106.09614). Can you comment on this?
---
Rebuttal 2:
Comment: Thank you for your response! We apologize for the confusion regarding the qualitative results.
We agree that the work "[1] Robust Model-based Face Reconstruction through Weakly-Supervised Outlier Segmentation [CVPR23]" can avoid shadow effects on the textures. However, **please note that it differs from our method and is limited in texture details.** [1] predicts a binary mask to remove the shadow regions, while our method reconstructs them under different illuminations.
Details such as beards may be difficult for [1] to capture because it fully relies on a **purely linear parametric 3D Morphable Model (3DMM)** to model the face textures and distinguish the occlusion regions. As stated in Sec. 3.1 of [1], it models face textures with 3DMM parameters predicted by networks.
Although the 3DMM does not have explicit shadows on its textures, as shown in the rebuttal PDF, it is hard for it to construct details such as the aforementioned beards, as stated in "Parametric Face Models", Sec. 2 of [2]. Therefore, even lowering the weight of the regularizer in Eq. 8 of [1] may not capture these details, due to the limitation of the 3DMM itself. The Limitations section (Sec. 4.5) of [1] also mentions that a face model capable of depicting facial details, makeup, and beards is required. While incorporating such a model could potentially address this issue, it requires substantial effort and falls outside the scope of our paper; we leave it for future work.
[2] Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz
In our method, although we also initialize face textures using a parametric AlbedoMM in Stage 2, our direct texture optimization in Stage 3 allows us to add fine details to the textures, as shown in Figure 10 of the appendix in our paper. This can be achieved because we model the shadows with illumination to avoid their influence on textures. Directly optimizing the textures in [1] may not achieve this. As explained in Sec. 3.2 of [1], it treats regions which cannot be well described with 3DMM under a global illumination, e.g. the beard regions, as occlusions and learns to predict a mask to remove them. Since these regions are removed, they cannot be further used to refine the 3DMM textures through optimization.
Since we are not allowed to include more qualitative results in this comment, we will provide additional visualizations of the predicted masks from [1] in the revised version to support our claim. Furthermore, the quantitative comparisons in Q1 also confirm the effectiveness of our method, which will be discussed in greater detail in the revised version.
Thank you once again for your feedback. Please feel free to reach out if you have any further questions.
---
Rebuttal Comment 2.1:
Comment: Thanks, I now understand the distinction between ignoring the outliers in [1] and modeling them in this submission. I will raise my score.
---
Reply to Comment 2.1.1:
Comment: Thank you for your comment! We are happy that our response addresses your concerns. We will add the aforementioned discussions to the revised version. | Rebuttal 1:
Rebuttal: We thank all reviewers for your valuable comments. We are encouraged that all reviewers recognize the sufficient motivation and good performance of our method. We apologize for any confusion caused by missing comparisons, citations, unclear definitions, and other issues. In this rebuttal, we reply to each concern carefully, point by point.
All quantitative comparisons are based on the texture quality metric for target images as defined in Line 198-203 of the main paper. We also present qualitative results in the attached rebuttal PDF. We hope our response addresses your concerns well. Please feel free to comment if you have any further questions or suggestions about this work.
Pdf: /pdf/6ee71e7e4701583f12e674be98c52e62bd4b8877.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm | Accept (poster) | Summary: The paper explores RL in infinite-horizon ergodic average-reward constrained MDPs under general policy parametrization. In this setting, the authors propose a primal-dual based policy gradient algorithm that simultaneously optimizes the policy parameter and a constraint violation penalty term $\lambda$. They also equip their method with techniques to address challenges due to the average-reward MDP setup, constraints on the MDP, and general policy parametrization. They prove global convergence of their resulting method in terms of the Lagrange function and from this derive bounds on the expected regret as well as constraint violation. Furthermore, when the policy class is expressive enough to approximately contain an optimal policy of the constrained MDP, so that the un-expressivity parameter $\varepsilon_{bias}$ is zero or negligibly small, the authors prove that the expected average regret and constraint violation of their method decrease at a rate of $O(T^{-1/5})$.
Strengths: 1. The authors consider the task of reinforcement learning in average-reward constrained MDPs and propose novel techniques to address underlying challenges.
2. This is a complete piece of work that explores provably efficient RL in average reward constrained MDPs. Their claims are backed by theoretical analysis, and they highlight the strengths as well as weaknesses of their work.
3. This submission is clearly written. The authors highlight the required assumptions and adequately discuss the nature of most relevant parameters.
4. This work significantly contributes to existing theory on reinforcement learning when it is more desirable to optimize the average return, rather than the typical discounted return, and in addition to return optimization, the policy is required to adequately adhere to additional constraints on the MDP.
Weaknesses: See questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Lines 243-246, the authors claim that there are scenarios under which the un-expressivity parameter $\varepsilon_{bias}$ is zero or negligible. I could reason about this claim when $\pi^*$ is an optimal policy in the unconstrained MDP, but unsure that the same holds when $\pi^*$ is an optimal policy in the constrained MDP (see Line 240). Can the authors kindly provide exact pointers in the referenced papers that verify this claim, or a proof in the appendix?
2. In Lines 323 and 324, the authors claim that the primal-dual approach to policy optimization is known to give worse results than in the unconstrained version, even for the discounted setting. Could this be related to the fact that the method is policy, rather than value based? In the offline setting, [1] takes a value-based primal-dual approach and their result implicitly highlights that the constrained setting may not be worse off than the unconstrained setting.
3. Can the authors kindly elaborate on the bias of single trajectory-based estimations in this setting? Reference [13] in the paper seems to achieve better performance even with this biased estimation strategy.
4. How strong is Assumption 5?
[1] Hong, K., & Tewari, A. (2024). A Primal-Dual Algorithm for Offline Constrained Reinforcement Learning with Low-Rank MDPs. arXiv preprint arXiv:2402.04493.
**Comments**
1. Line 265 makes reference to equation 21 twice rather than the expressions after lines 263 and 264.
2. Expectation is sometimes expressed without the parenthesis. For example, see Equations 20, 22, as well as Lines 265, 266, etc.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None. This is a purely theory paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to **Question 1**: Note that a constrained MDP can be considered as an unconstrained MDP where the (equivalent) reward function is $r+\lambda c$ and the optimization is performed over all policies $\pi\in \Pi$ and $\lambda\geq 0$. In other words, the original constrained optimization is equivalent to the following primal formulation: $\max\_{\pi\in\Pi}\min\_{\lambda\geq 0} J^{\pi}\_{r+\lambda c}$. Note that the solution to this max-min problem, $\pi^*$ is precisely the solution to the original CMDP. Therefore, it is natural that $\epsilon_{\mathrm{bias}}$, in this case, is defined via $\pi^*$ and $A_{\mathrm{L}, \lambda}$ where $A_{\mathrm{L}, \lambda}=A_r+\lambda A_c$ denotes the advantage function corresponding to $r+\lambda c$. A similar definition of $\epsilon_{\mathrm{bias}}$ can also be found in earlier works (e.g., see Assumption 4 in ref. [4])
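For intuition, the max-min structure above can be illustrated on a one-dimensional toy problem (our own sketch for illustration only, unrelated to the paper's actual algorithm or guarantees): gradient ascent on the primal variable combined with projected gradient descent on the multiplier $\lambda$.

```python
# Toy primal-dual gradient ascent-descent (illustrative only):
#   max_x  r(x) = -x**2   subject to  c(x) = x - 1 >= 0.
# The Lagrangian is L(x, lam) = r(x) + lam * c(x); at the saddle point
# x* = 1 and lam* = 2 (from dL/dx = -2x + lam = 0 with the constraint active).

def primal_dual(steps=2000, eta=0.05):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        grad_x = -2.0 * x + lam               # ascent step on the primal variable
        x += eta * grad_x
        grad_lam = x - 1.0                    # descent step on the dual variable,
        lam = max(0.0, lam - eta * grad_lam)  # projected onto lam >= 0
    return x, lam

x, lam = primal_dual()
print(round(x, 2), round(lam, 2))  # approaches the saddle point (1, 2)
```

The projection `max(0.0, ...)` enforces the feasibility of the multiplier, mirroring the $\lambda \geq 0$ constraint in the max-min formulation above.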
Response to **Questions 2 and 3**: The remark that the primal-dual approach performs worse than the unconstrained setup was mainly targeted for general parameterization. Table 1 in the paper shows that a similar conclusion does not hold for either tabular or linear MDPs. The paper by Hong and Tewari cited by the reviewer assumes a linear MDP setup and thus is not a focus of this discussion. To understand why a gap arises between a constrained and an unconstrained case in general parameterization, one can take a look at Lemma 6. An extra $\beta$ term (dual learning rate) appears in the convergence of the first-order stationary result which is accompanied by an $\mathcal{O}(\beta^{-1})$ term during the segregation of objective optimality gap and constraint violation rate (Theorem 1). Note that the second $\mathcal{O}(\beta^{-1})$ term is absent in unconstrained case which allows us to choose $\beta = 0$ and obtain a convergence rate of $\tilde{\mathcal{O}}(T^{-1/4})$ as in [13]. However, such a choice is infeasible for the constrained case, leading to a worsening of the final result.
Response to **Question 4**:
For Assumption 5: Assumption 5 is also common in policy gradient analysis (Liu et al., 2020). This is not a very restrictive assumption. We corroborate this claim by quoting (Liu et al., 2020) (where this assumption is Assumption 2.1):
"This is a common (and minimal) requirement for the convergence of preconditioned algorithms in both convex and nonconvex settings in the optimization realm, for example, the quasi-Newton algorithms [40, 41, 42, 43], and their stochastic variants [44, 45, 46, 47, 36]. In the RL realm, one common example of policy parametrizations that can satisfy this assumption is the Gaussian policy [2, 48, 19, 21], where $\pi_\theta(\cdot| s) = N(\mu_\theta(s), \Sigma)$ with mean parametrized linearly as $\mu_\theta(s) = \phi(s)^T\theta$, where $\phi(s)$ denotes some feature matrix of proper dimensions, $\theta$ is the coefficient vector, and $\Sigma >0$ is some fixed covariance matrix. In this case, the Fisher information matrix at each $s$ becomes $\phi(s)\Sigma^{-1}\phi(s)^T$, independent of $\theta$, and is uniformly lower bounded (positive definite sense) if $\phi(s)$ is full-row-rank, namely, the features expanded by $\theta$ are linearly independent, which is a common requirement for linear function approximation settings [49, 50, 51].

For $\mu_\theta(s)$ being nonlinear functions of $\theta$, e.g., neural networks, the positive definiteness can still be satisfied, if the Jacobian of $\mu_\theta(s)$ at all $\theta$ uniformly satisfies the aforementioned conditions of $\phi(s)$ (the Jacobian in the linear case). In addition, beyond Gaussian policies, with the same conditions mentioned above on the feature $\phi(s)$ or the Jacobian of $\mu_\theta(s)$, Assumption 2.1 also holds more generally for any full-rank exponential family parametrization with mean parametrized by $\mu_\theta(s)$, as the Fisher information matrix, in this case, is also positive definite, in replace of the covariance matrix $\phi(s)\Sigma^{-1}\phi(s)^T$ in the Gaussian case [11].

Indeed, the Fisher information matrix is positive definite for any regular statistical model [28]. In the pioneering NPG work [24], $F(\theta)$ is directly assumed to be positive definite. So is in the follow-up works on natural actor-critic algorithms [39, 7]. In fact, this way, $F_\rho(\theta)$ will define a valid Riemannian metric on the parameter space, which has been used for interpreting the desired convergence properties of natural gradient methods [3, 32]. In sum, the positive definiteness on the Fisher preconditioning matrix is common and not restrictive."
We will expand on this in the revision to explain Assumption 5.
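To make the quoted claim concrete, here is a small self-contained check (our own illustration, not from Liu et al. or the paper) that for a linearly parameterized Gaussian policy the Fisher preconditioning matrix $\phi(s)\Sigma^{-1}\phi(s)^T$ is positive definite when $\phi(s)$ has full row rank:

```python
import numpy as np

# For a Gaussian policy pi_theta(.|s) = N(phi(s)^T theta, Sigma), the Fisher
# information matrix at a state s is F(s) = phi(s) Sigma^{-1} phi(s)^T; it is
# positive definite whenever phi(s) is full-row-rank and Sigma > 0.

rng = np.random.default_rng(0)
phi = rng.standard_normal((2, 3))        # feature matrix; full row rank a.s.
Sigma = np.diag([1.0, 2.0, 0.5])         # fixed positive-definite covariance

F = phi @ np.linalg.inv(Sigma) @ phi.T   # 2x2 Fisher information at s
eigvals = np.linalg.eigvalsh(F)
print(np.all(eigvals > 0))               # F is positive definite
```

If `phi` were rank deficient (e.g., two identical rows), the smallest eigenvalue would drop to zero and the uniform lower bound in the assumption would fail.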
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal.
I am satisfied with their response and have read other reviews and responses.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We are pleased to hear that our response addressed your key comments. | Summary: This paper tackles the infinite horizon average reward Constrained MDP setting and proposes an analysis of regret and constraint violation under a general policy parametrization. They devise a primal-dual policy gradient algorithm achieving global optimality while ensuring sublinear bounds on the regret and constraint violation.
Strengths: The paper is interesting and well-written, and places itself well in the related literature by filling the gap on infinite horizon average reward constrained MDPs under general parametrized policies.
They employ techniques similar to those found in the related unconstrained setting, but also highlight and overcome the challenges encountered in the constrained and average reward setting. In particular, they take inspiration from existing techniques developed for the discounted setting and adapt them to cope with the average reward MDP.
Weaknesses: From the theoretical perspective, I do not recognize specific weaknesses.
However, I would like to highlight that the authors assume knowledge of the mixing and hitting time of the MDP. I recognize that this is a standard assumption in this type of work; however, I wonder whether these quantities are necessary to achieve these results, or if the authors are aware of similar techniques that can provide sublinear guarantees without this knowledge. Recently, the work of [1] highlighted some techniques that allow achieving global convergence results without this knowledge. Would it be possible to apply these techniques to this setting as well?
From the simulation perspective, I believe that some experimental results showing the validity of the presented approach would be beneficial for this work.
[1] Patel B. et al., Towards Global Optimality for Practical Average Reward Reinforcement Learning without Mixing Time Oracles, ICML 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness section.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to **Weakness 1**: Thank you for suggesting the paper by Patel et al., where the algorithm works without knowledge of $t_{\mathrm{mix}}$. We would like to point out that such a feat is achieved by introducing an additional assumption and weakening the convergence result. In particular, the paper solves the average (unconstrained) MDP problem via an actor-critic approach where the value function of the critic is taken to be a linear function of a feature vector. No such assumption is needed in our paper. Secondly, their approach introduces an additional critic error $T\sqrt{\epsilon_{\mathrm{critic}}}$ in the final regret, in addition to the usual approximation error $T\sqrt{\epsilon_{\mathrm{bias}}}$. Despite these drawbacks, we agree that their approach is worth exploring in the future. However, we believe that if the linear critic assumption in this paper is relaxed, the regret bounds will become even worse (a similar effect has been recently shown in the sample complexity results for discounted MDPs). We are currently not sure about the applicability of this technique to our CMDP problem.
We would also like to mention that the suggested paper became public on arXiv on March 18th for the first time, which is within two months of the NeurIPS 2024 abstract submission deadline, and thus could be considered as concurrent research. We will cite this work in the final version, and suggest extending its approaches to CMDPs as a future work.
Response to **Weakness 2**: We agree that empirical evaluations will be a nice corroboration of our established results. However, considering this is a theory-oriented paper, our goal is to propose a policy gradient-type algorithm in an infinite horizon average reward setting and establish its theoretical guarantees. A detailed evaluation will be the subject of future work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses, I have no further questions for the moment. | Summary: This paper studies learning in constrained MDPs with the long-run average-reward objective.
It gives the first sample complexity, regret bound and constraint violation bound in this setting with general parameterization, whereas all prior work is restricted to either tabular or linear parameterizations.
Strengths: The result and analysis look solid. The presentation is also clear.
Weaknesses: 1. The assumption that all policies induce an aperiodic irreducible Markov chain is a bit strong. To what extent can you weaken the assumptions such that the current analysis in the paper still goes through? Is it enough if we only assume the Markov chain is weakly communicating?
2. Claiming that the regret is O(T^{4/5}) in Table 1 is a bit misleading, because in the regret and constraint violation bounds in Theorem 2, there are actually terms linear in T whose coefficients depend on the transfer error \epsilon_{bias}. I understand that the linear-in-T error terms due to the transfer error are common in the analysis of RL with general parameterizations, and the transfer error is zero when the parameterization is tabular. However, it is still worth discussing whether the coefficients of the linear-in-T terms in Theorem 2 are tight and optimal, how they compare to the prior work, and whether they are truly negligible when the policy is parameterized by a neural net of a reasonable scale. At the very least, how the regret and constraint violation are actually calculated should be clarified in a prominent part of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: See my questions in the weaknesses section.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There is no negative social impact. There are some discussion of limitation in the conclusion section, but not sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to **Weakness 1**: Unfortunately, the proposed algorithm may not be extended to the weakly communicating (WC) setting. In general, it is difficult to propose a model-free algorithm with provable guarantees for Constrained MDPs (CMDPs) without considering the ergodic model. WC CMDPs impose extra challenges in learning compared to the ergodic case. For example, there is no uniform bound for the span of the value function for all stationary policies. It is also unclear how to estimate a policy’s bias function accurately without the estimated model, which is an important step for estimating the policy gradient. The above-stated difficulty is evident from the fact that no model-free algorithm is available in the literature that works for WC CMDPs with provable guarantees. Moreover, only a single model-free algorithm (ref. [8] in the paper) is available for ergodic CMDPs. In other words, all existing algorithms that work for WC CMDPs are model-based in nature. These approaches cannot be extended to the large state space scenario which is the premise of our paper.
Response to **Weakness 2**: Thanks for providing the suggestion. We will modify the table to point out the dependence on the linear term $T\sqrt{\epsilon_{\mathrm{bias}}}$. The transferred compatible function approximation error is a common assumption in policy-based algorithms (see ref. [10], [11] in the paper). From the sample complexity result (Theorem 1), we can see that the proposed algorithm converges to the neighborhood of the global optima within a distance of $\sqrt{\epsilon_{\mathrm{bias}}}$, which is the same result as in the literature.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' informative response to my comments. | Summary: This paper propose a primal dual policy gradient method for solving average reward constrained MDPs. The primal problem is minimizing the usual RL objective with a averaged reward, plus a penalty induced constraint violation term. The dual problem is to find appropriate Lagrangian multiplier that balances the two objectives to reach equilibrium. The proposed method has reached the claimed regret bound and constraint violation bounds with general policy parameterization.
Strengths: The paper is well written and its components are clearly explained. The presentation is good and easy to understand. The investigated topic is interesting and important for real-life applications, and the average reward setting is harder to study in general compared to its discounted counterpart due to the absence of contraction of the Bellman operator under the infinity norm.
The analysis is easy to follow and plainly laid out. The basic flow of thought is easy to follow and nicely connected. The analysis itself is solid in terms of its assumptions made in the paper.
Weaknesses: There are a few points that made this paper limited in its technical contribution.
1. The claimed general parameterization seems to rely on an accurate policy model such as a neural network. In the paper, the authors assume we can somehow obtain an accurate policy for free. In reality, this is far from the truth. In this sense, the claimed contribution is not of much importance.
2. Also related to the first point, it seems that for policy evaluation (Algorithm 2), the value functions are still in tabular form. Additionally, this closely follows the policy evaluation method proposed in [17] referred to in the paper, which seems to be the hardest part in average-reward MDPs. Existing literature shows that policy optimization alone is not so different from the discounted setup. See
Li, Tianjiao, Feiyang Wu, and Guanghui Lan. "Stochastic first-order methods for average-reward markov decision processes." arXiv preprint arXiv:2205.05800 (2022).
3. There are no numerical experiments to validate the proposed algorithm. It is increasingly important for RL researchers to bridge the gap between theory and practice.
Technical Quality: 3
Clarity: 3
Questions for Authors: It seems uniform ergodicity is crucial in the paper. I wonder whether one can relax the assumption for the MDP to be unichain.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to **Weakness 1**: We would like to clarify that we do not assume access to accurate knowledge of the optimal policy since that would contradict our learning objective. We merely assume that the policy belongs to a class where each member is indexed by a parameter $\theta$. The goal is to find the parameter $\theta^*$ that corresponds to the optimal policy. In reality, such a parameterized class can be modeled via a neural network whose weights act as the index parameter. We start with an arbitrary $\theta_0$ (which can be far away from $\theta^*$) and gradually move towards optimality using our proposed primal-dual steps.
Response to **Weakness 2**: We agree with the reviewer that the policy evaluation part is inspired by [17]. However, we would like to point out some major differences.
(1) The value estimation (Algorithm 2) is not tabular in the sense that it need not be evaluated for all state-action pairs. This routine only needs to be invoked for the pairs that are encountered by Algorithm 1 on the go.
(2) The parameter $H$ in Algorithm 2 differs from that in [17]. With the same parameter choice, our proposed policy gradient algorithm would diverge.
Additionally, the overall approach in our paper differs significantly from that of [17].
(3) In particular, our paper considers a constrained setting, which includes a primal-dual approach that is not included in [17].
(4) Finally, [17] considers a tabular setting which enjoys the strong duality. However, the general parameterization setting considered in our paper does not have the same benefit. The convergence result for the dual problem, therefore, does not automatically translate to that for the primal problem. Our novel introduction of Theorem 1 allows us to separate the objective and constraint violation rates.
Response to **Weakness 3**: We agree that empirical evaluations will be a nice corroboration of our established results. However, considering this is a theory-oriented paper, our goal is to propose a policy gradient-type algorithm in an infinite horizon average reward setting and establish its theoretical guarantees. A detailed evaluation will be the subject of future work.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I thank the authors for their response. Although I am not entirely satisfied by the author's rebuttal, the overall quality of this paper is acceptable. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents a primal-dual policy gradient algorithm for infinite horizon average-reward Constrained Markov Decision Processes with general policy parametrization. It uniquely addresses minimizing regret and managing constraints simultaneously, providing the first analysis showing sub-linear bounds of O(T^{4/5}) for both regret and constraint violations. This approach extends the applicability of reinforcement learning to complex, large-scale decision-making problems under practical constraints.
Strengths: - The paper is the first to apply a primal-dual policy gradient framework to this particular setting of CMDPs, addressing a gap in the literature.
- The problem is well-motivated.
- The paper offers theoretical sub-linear bounds for both regret and constraint violations.
Weaknesses: - The assumption of ergodicity is quite strong and may limit the applicability of the proposed algorithm in practical scenarios where such conditions are not met. This contrasts with the Upper Confidence Reinforcement Learning (UCRL) framework, which operates under the more relaxed condition of weakly communicating MDPs.
- The literature review table omits the MDP modeling assumptions. Specifically, the regret bound offered by the proposed algorithm is O(T^{4/5}), while tabular constrained RL algorithms under more general multichain MDP (communicating/weakly communicating) assumptions in [6] achieve O(T^{2/3}). Related facts were not mentioned in the paper.
- The mixing time, t_{\text{mix}}, tends to depend unfavorably on the state space S and action space A, as reflected in the overall diameter of the MDP.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you explain in more detail how the mixing time depends on the state space S and action space A? This is critical, especially in large-scale problems. Clarification on how t_{\text{mix}} scales with S and A, and its impact on the algorithm's performance, would provide a better understanding of the method's applicability to complex environments.
- Error propagation for estimated t_{\text{mix}} : Information on the numerical performance of the algorithm, particularly how t_{\text{mix}} influences runtime and convergence in practical scenarios, would substantiate the theoretical claims. Experimental results or simulations that highlight these dynamics could significantly enhance the manuscript.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors touched upon future work/limitations on theoretical complexity
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to **Weakness 1**: Note that the framework of UCRL that works for Weakly Communicating (WC) MDPs is a model-based method that cannot be extended to large state space. Designing model-free algorithms, especially for Constrained MDPs (CMDPs), is an extremely difficult problem. This is evident from the fact that no model-free algorithms are available in the literature that yield provable guarantees for WC CMDPs. Only a single paper in the literature (ref. [8] in the paper) provides a model-free algorithm that works for ergodic CMDPs. Unfortunately, this paper has a worse $\tilde{\mathcal{O}}(T^{5/6})$ regret compared to our result and adopts the tabular setup. In contrast, our paper provides the first regret bound for ergodic CMDPs with general parameterization (that subsumes the tabular case) and improves the regret to $\tilde{\mathcal{O}}(T^{4/5})$. Extension of our result to the WC case will be considered in the future.
Response to **Weakness 2**: Thanks for providing the suggestion. We will add an extra column in the table that states the model assumptions used in the associated works for a fairer comparison between the algorithms.
Response to **Weakness 3**: Intuitively, $t_{\mathrm{mix}}$ indicates how fast the $t-$step transition probability converges to the stationary distribution. To the best of our knowledge, there is no known lower bound of $t_{\mathrm{mix}}$ in terms of $S$ and $A$. Therefore, it is theoretically possible that, even in an MDP with infinite states, $t_{\mathrm{mix}}$ could be finite.
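To make the role of $t_{\mathrm{mix}}$ concrete, here is a small self-contained illustration (our own toy chain, not from the paper), using the common convention that the mixing time is the first step at which the worst-case total-variation distance to the stationary distribution falls to 1/4:

```python
import numpy as np

# Mixing time of a small ergodic two-state Markov chain, taken here as the
# first t with max_s TV(P^t(s, .), pi) <= 1/4.

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()                       # here pi = (2/3, 1/3)

Pt = np.eye(2)
t_mix = 0
while True:
    t_mix += 1
    Pt = Pt @ P
    tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
    if tv <= 0.25:
        break
print(t_mix)  # 3 for this chain
```

Note that `t_mix` here is governed by the second eigenvalue of `P` (0.7), not by the number of states, which is consistent with the point above that $t_{\mathrm{mix}}$ need not grow with $S$ or $A$.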
Response to **Question 1**: Please refer to weakness 3.
Response to **Question 2**: We agree that empirical evaluations will help characterize the impact of $t_{\mathrm{mix}}$ estimation on the performance of the algorithm. However, since ours is a theory-oriented paper, our main goal is to establish provable guarantees for the proposed algorithm. A detailed evaluation will be the subject of future work. | null | null | null | null | null | null |
Multi-Agent Coordination via Multi-Level Communication | Accept (poster) | Summary: This paper proposes a multi-level sequential communication framework that has two communication phases. It also proves that the policies learned in this way are guaranteed to improve and converge. Instead of observations, this paper focuses on making agents communicate about their action selections. Each agent is assigned a decision-making priority, and an equilibrium is set up as the learning objective for all agents working together.
Strengths: This paper is well-written and well-motivated; it borrows ideas from game theory and tries to bring equilibrium concepts into the multi-agent decision-making process.
Weaknesses: 1. If the authors formulate this problem as a multi-agent sequential decision-making problem, the Markovian property required for formulating the problem as an MDP is no longer satisfied, and there is a need to prove an alternative policy gradient theorem before simply inserting SeqComm into MAPPO.
2. In Figure 2's caption, the authors state that 'in the launching phase, the agents who hold the upper-level positions will make decisions prior to the lower-level agents. Besides, their actions will be shared with anyone that has not yet made decisions'. How should the structure of the MAPPO model be changed to fit this, and how is the time step adjusted based on this change?
3. There is no code available in this submission, so the details of the models' structures cannot be seen. The reproducibility remains uncertain.
4. Only comparing to TarMAC is not enough; this communication-based MARL algorithm was proposed in 2019, and there are more recent MARL communication algorithms that outperform TarMAC.
Technical Quality: 2
Clarity: 3
Questions for Authors: See the weakness section above.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To begin with, we thank the reviewer for the careful review and insightful advice.
>If the authors formulate this problem as a multi-agent sequential decision-making problem, the Markovian property required for formulating the problem as an MDP is no longer satisfied and there is a need to prove an alternative policy gradient theorem before simply inserting the Seqcomm into MAPPO.
To connect SeqComm and MAPPO while avoiding this problem, we build a new MDP $\tilde{M}$ based on the original MDP $M$. The state space of $\tilde{M}$ can be divided into multiple layers, and state transitions only occur between consecutive layers. The sequential decision-making process in the original MDP corresponds to a round of agent-by-agent decisions, while the Markovian property of $\tilde{M}$ is preserved. We show that SeqComm in the original MDP is equivalent to agent-by-agent PPO in the new MDP $\tilde{M}$. More detailed discussions are included in Appendix A.
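This layered construction can be illustrated with a minimal sketch (our own toy code under assumed interfaces, not the authors' implementation): within one environment step, agents commit actions one at a time, intermediate layers carry the actions committed so far, and the real environment only transitions once everyone has decided, so the underlying dynamics are untouched.

```python
class ToyEnv:
    """Minimal stand-in environment: state counts rounds, reward sums actions."""

    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, joint_action):
        self.t += 1
        return self.t, float(sum(joint_action))


class LayeredMDP:
    """Augment states with the tuple of actions already committed this round,
    so each agent's decision is Markovian in the augmented state."""

    def __init__(self, env, n_agents=2):
        self.env = env
        self.n_agents = n_agents
        self.state = env.reset()
        self.pending = []  # actions committed so far in the current round

    def step(self, action):
        self.pending.append(action)
        if len(self.pending) < self.n_agents:
            # intermediate layer: no environment transition, zero reward
            return (self.state, tuple(self.pending)), 0.0
        joint = tuple(self.pending)  # final layer: advance the real env
        self.pending = []
        self.state, reward = self.env.step(joint)
        return (self.state, ()), reward


m = LayeredMDP(ToyEnv())
obs, r = m.step(1)   # upper-level agent commits its action first
obs, r = m.step(2)   # lower-level agent sees (state, (1,)) before acting
print(obs, r)        # (1, ()) 3.0
```

Because the wrapped environment only calls `env.step` with the full joint action, the per-agent decision process is Markovian in the augmented state while the original joint dynamics are preserved.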
>In Figure 2's caption, the authors stated that 'in the launching phase, the agents who hold the upper-level positions will make decisions prior to the lower-level agents. Besides, their actions will be shared with anyone that has not yet made decisions, then how to change the structure of the MAPPO model to make it fit this change? And how to adjust the time step based on this change?
Assume there are two agents. In the original MAPPO, the policy of each agent is p(ai|s) and the value function is v(s). In our setting, the second-level agent's policy is p(a2|s, $ \emptyset $) and its value function is v(s, $ \emptyset $); we change the structure of the first-level agent so that its policy is p(a1|s, a2) and its value function is v(s, a2). Other training procedures are the same. In practice, we use communication channels to achieve information sharing.
We did not change the time step: after all the agents make decisions, they execute their actions to interact with the environment simultaneously. Note that we did not break the fundamental dynamics, p(s'|s, a1, a2).
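As a hypothetical sketch of this conditioning structure (random stub policies standing in for the trained MAPPO networks; the function names are ours), assuming a two-agent setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_a2(state):
    """Upper-level agent: p(a2 | s, ∅) — decides first, seeing only the state."""
    return int(rng.integers(0, 4))

def policy_a1(state, a2):
    """Lower-level agent: p(a1 | s, a2) — conditions on the state AND the
    already-decided action a2, received via the communication channel."""
    return int((a2 + rng.integers(0, 4)) % 4)

state = rng.standard_normal(8)
a2 = policy_a2(state)        # decided first, then shared
a1 = policy_a1(state, a2)    # conditioned on a2
# Both actions are then executed simultaneously, so the environment
# dynamics p(s' | s, a1, a2) are unchanged.
```

The key point the sketch mirrors is that only the policies' inputs change; the environment step itself still receives the joint action.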
>There is no code available in this submission, the details of the models' structures could not be seen. The reproducibility remains unsure.
Many methods in communication-based MARL do not share their code, which makes comparison very hard. We do not like this trend either, and we guarantee that we will release our code as soon as this paper is accepted. We also guarantee that all the results can be reproduced.
>Only comparing to TarMAC is not enough, this MARL with communication algorithm was proposed in 2019, and there are more recent MARL communication algorithms outperforming TarMAC.
We have added more baselines as requested in Figure 3 (pdf in the global response). Note that many methods in communication-based MARL did not share their code, meaning many baselines cannot be compared fairly. We chose two baselines for comparison (CommFormer [1] and MAIC [2]) using two criteria: 1. published at a recent top conference; 2. released code for a fair comparison.
The results show that SeqComm still outperforms these baselines by a large margin. Surprisingly, although we tried our best to tune the parameters under the limited timeline, CommFormer (a 2024 baseline) still performs worse than MAIC (a 2022 baseline).
[1]. Learning Multi-Agent Communication from Graph Modeling Perspective. ICLR 2024
[2]. Multi-Agent Incentive Communication via Decentralized Teammate Modeling. AAAI 2022
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my concerns; I like the idea that the code will be released. I suggest that the authors include the discussion of the new MDP in the main body of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestion. We will include the discussions about the new MDP in the main body of the paper and release the code. If you find our clarifications satisfactory, we kindly ask for your consideration in adjusting your scores accordingly. | Summary: This paper introduces SeqComm, a novel multi-level communication scheme for multi-agent coordination in reinforcement learning. The key contributions are:
1. A new approach treating agents asynchronously with two communication phases: negotiation and launching.
2. Theoretical analysis proving monotonic improvement and convergence of learned policies.
3. Empirical evaluation on StarCraft multi-agent challenge v2 (SMACv2) showing superior performance over existing methods.
The paper addresses the coordination problem in multi-agent settings by allowing agents to communicate actual actions and determine decision-making priorities dynamically.
Strengths: The paper provides a solid theoretical foundation with proofs for policy improvement and convergence (Propositions 1 and 2, Theorem 1). The experimental methodology is sound, using the challenging SMACv2 benchmark and comparing against relevant baselines. The ablation studies support the importance of the proposed components.
The paper is generally well-written and organized. The introduction clearly motivates the problem and contributions. The method section provides a detailed explanation of SeqComm. Figures and algorithms help illustrate the approach.
The paper presents a novel and significant contribution to multi-agent reinforcement learning. The idea of asynchronous decision-making with dynamic priority determination is original and addresses an important challenge in the field. The theoretical guarantees and strong empirical results on a challenging benchmark demonstrate the value of the approach.
Overall the strengths are:
- Novel multi-level communication scheme addressing the coordination problem
- Theoretical analysis providing performance guarantees
- Fairly strong empirical results on SMACv2, outperforming existing methods
- Ablation studies demonstrating the importance of key components
- Addresses both full and local communication scenarios
I'd also like to point out that this was run on a GTX 1050, which makes the model approachable to a general audience.
Weaknesses: - The reliance on homogeneous agents with parameter sharing may limit the applicability of SeqComm in real-world scenarios where agents often have diverse capabilities. However, it is a commonly accepted assumption.
- The approach heavily relies on a world model, but there's limited discussion on how the quality of this model affects performance, especially in more complex environments. Also the effect of the attention module is not analysed thoroughly. Attention mechanisms can be computationally expensive, especially as the number of agents increases. The paper doesn't address how this affects the overall scalability of SeqComm. It's not entirely clear how the attention mechanism is adapted for the local communication scenario, where agents only communicate with nearby agents. Furthermore, attention weights could potentially provide insights into which agents or information are most important for decision-making, but this aspect isn't explored in the paper.
- Overall, I believe a table reporting IQM values over all environments would strengthen the paper and help with reproducibility. Furthermore, plots showing wall-clock time in comparison to other models would be insightful, especially since final performance only appears to be significantly better in p_10v10 and t_10v10.
- While the paper addresses local communication, there's limited discussion on the trade-offs between communication frequency, accuracy, and overall system performance.
- Though not standard in related work either, the paper doesn't thoroughly explore the robustness of SeqComm to noisy observations, communication failures, or adversarial agents, which are common challenges in real-world multi-agent systems.
- While TarMAC is a strong baseline, the paper could benefit from comparisons with more recent state-of-the-art methods in multi-agent communication.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the computational complexity of SeqComm scale with the number of agents, particularly in the negotiation phase? Is there a point where the overhead becomes prohibitive?
2. Have you explored the performance of SeqComm in heterogeneous agent scenarios where parameter sharing may not be applicable?
3. How might the method be adapted for such settings?
4. How sensitive is the approach to the quality of the world model, especially in more complex environments? What happens if the world model is significantly inaccurate?
5. The paper mentions that SeqComm can provide a proper priority of decision-making. How does this prioritization mechanism perform in highly dynamic environments where optimal decision order may change rapidly?
6. Have you investigated the potential emergence of behavioral patterns or strategies across agents due to the asynchronous decision-making process? Are there any interesting emergent behaviors?
7. How does the attention mechanism perform compared to simpler aggregation methods? An ablation study could illuminate this.
8. Does the attention mechanism learn meaningful patterns of inter-agent importance? Visualizing attention weights could provide insights into learned coordination strategies.
9. How does the computational cost of the attention mechanism scale with the number of agents, especially in the full communication scenario?
10. Could more advanced attention mechanisms (e.g., multi-head attention) provide further improvements?
11. How is the attention mechanism modified to handle the local communication scenario effectively?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The evaluation is limited to one type of environment (SMACv2). It's unclear how well the method generalizes to other multi-agent scenarios with different dynamics or objectives.
- The heavy dependence on a world model could be a significant limitation in environments where accurate modeling is challenging or computationally expensive.
- While the paper addresses local communication, there's limited discussion on the trade-offs between communication frequency, accuracy, and overall system performance.
- The empirical results are not significantly better in a lot of environments and changing the sight range from SmacV2 makes comparisons to previous work harder.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To begin with, we thank the reviewer for the careful review and insightful advice.
>Computational complexity.
The computational complexity of SeqComm is mainly related to communication overhead.
For full communication, SeqComm needs more rounds, but it transmits observation information only once. In the remaining n − 1 rounds of communication, with a total of (n − 1)/2 broadcasts per agent, only a single intention value and an action are exchanged. Considering there are n! permutations of decision orders for n agents, our method greatly reduces computational overhead, since each agent needs to perform at most n evaluations to search for a satisfying order. Note that, in Section 3, we mentioned the cost-free communication setting. This extreme case gives us a better understanding of the benefit of communication, even if the results do not apply across all domains. Therefore, we propose a more applicable setting.
For local communication, there are only two communication rounds: one for sharing observation information, and another for intention values.
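As a hypothetical back-of-the-envelope illustration of the n! vs. per-agent-n contrast (the function names are ours, not from the paper):

```python
import math

def exhaustive_order_count(n):
    """Evaluating every possible decision order would cost n! evaluations."""
    return math.factorial(n)

def seqcomm_order_evals_per_agent(n):
    """In a greedy, agent-by-agent order search, each agent performs
    at most n evaluations to find a satisfying position in the order."""
    return n

for n in (3, 5, 8):
    print(n, exhaustive_order_count(n), seqcomm_order_evals_per_agent(n))
```

Even for n = 8 agents the exhaustive search would require 40320 evaluations, against at most 8 per agent in the greedy scheme.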
>Heterogeneous agent scenarios.
We would like to point out that the agents in SMACv2 can be heterogeneous. Each map has three types of agents with different functions (ranged and melee), and the types are randomly refreshed. It turns out our method can be applied to heterogeneous agent scenarios: one policy can learn different strategies for different types of agents. Note that the observation contains the information of the agent type.
>World model.
Theorem 1 provides a useful relationship between the compounding errors and the policy update. As long as we improve the return under the true dynamics by more than the gap (mentioned in line 292), we can guarantee policy improvement under the world model. If no such policy exists to overcome the gap, the model error is too high, i.e., there is a large discrepancy between the world model and the true dynamics. In that case, the order sequence obtained under the world model is not reliable and is almost the same as a random one. Although a random order sequence also has the theoretical guarantee of Proposition 2, we have shown in Section 5.2 that a random order sequence empirically leads to a poor local optimum, though it still converges.
>The prioritization mechanism
At each step, we recompute the order based on the current situation. Proposition 2 shows that, even if the order changes at each step, the monotonic improvement and convergence of the joint policy in SeqComm are not affected.
>Emergence of behavioral patterns
We have visualized some key frames to illustrate the behavioral patterns in Figure 2 (pdf in the global response). In the combat game, a unified attack on one enemy is always more effective than dispersed attacks. In frames 1-3, agents have no target until one agent (at the end of the orange arrow) gets close to an enemy (bottom right corner). In frame 4, through the negotiation phase, that agent is chosen as the highest-level agent (level 5) since it is in a better position to choose an enemy to attack. After the lower-level agents obtained the actions of the higher-level agents (illustrated by a white dashed line), all the red units ended their random roaming and instead launched a unified attack on the blue units.
A similar behavioral pattern can also be observed in Frames 7-9.
> Simpler aggregation methods.
We have done the ablation study as requested in Figure 1 (pdf in the global response). It turns out the attention mechanism performs better than a simpler aggregation method. The reason is that simply aggregating the observations expands the input dimension of the neural network, which impairs the learning process, since high-dimensional input may contain much irrelevant information [1]. The attention mechanism, in contrast, helps focus on the more important information by learning different weights for different observations.
[1] Learning individually inferred communication for multi-agent cooperation. NIPS 2020.
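To illustrate this contrast, here is a minimal sketch (our own toy functions, not the paper's architecture) of concatenation-based vs. attention-weighted aggregation:

```python
import numpy as np

def concat_aggregate(obs_list):
    """Naive aggregation: concatenation makes the input dimension grow
    linearly with the number of agents, carrying irrelevant information."""
    return np.concatenate(obs_list)

def attention_aggregate(query, obs_list):
    """Attention-style aggregation: a softmax-weighted sum keeps the
    input dimension fixed while the weights focus on relevant agents."""
    obs = np.stack(obs_list)              # (n_agents, d)
    scores = obs @ query                  # (n_agents,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over agents
    return weights @ obs                  # (d,) regardless of n_agents

rng = np.random.default_rng(0)
obs_list = [rng.standard_normal(16) for _ in range(5)]
query = rng.standard_normal(16)
assert concat_aggregate(obs_list).shape == (80,)            # grows with n
assert attention_aggregate(query, obs_list).shape == (16,)  # fixed
```

In a real model the query and weights would be produced by learned projections; the point here is only the fixed output dimension.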
> Attention mechanism learns meaningful patterns.
We have observed two patterns. One is that upper-level agents are highlighted, since they provide important actions. The other is that agents far away are overlooked.
> How does the computational cost of the attention mechanism scale with the number of agents, especially in the full communication scenario?
In full communication scenarios, the computational cost increases linearly with the number of agents.
In local communication scenarios, due to the limited communication range, the number of locally communicating agents is also limited. Therefore, the communication cost has an upper bound.
>More advanced attention mechanisms (e.g., multi-head attention).
Yes, it may help to process the incoming messages from other agents. However, since the main contributions do not come from the attention mechanism, we did not spend much time investigating this. That said, how attention mechanisms or the transformer architecture influence the learning process is another high-profile line of research in RL.
> How is the attention mechanism modified to handle the local communication scenario effectively?
The attention mechanism originated in natural language processing, where it is inherently capable of processing inputs of different lengths, just like recurrent neural networks.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the extensive rebuttal and running additional results. Your replies certainly improved my outlook on the paper.
One more weakness that I'd like your comment on is:
> Furthermore, plots showing wall-clock time in comparison to other models would be insightful, especially since final performance only appears to be significantly better in p_10v10 and t_10v10.
I do not expect the wall-clock plots for each experiment, given that time is limited. However, I do believe it's an insightful comparison with baselines, as different communication techniques come with different computational costs. In the extreme, your method could take 100x the compute to achieve slightly better results. Alternatively, your method could be 100x faster, which would make the paper that much stronger. I agree that the ultimate metric is final training performance and wall-clock time is secondary, so this would not be a dealbreaker.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback.
We are sorry we missed this important question. Since the differences in training speed between protoss, terran, and zerg maps are very small within the same method, we take the results on protoss maps to illustrate the difference.
FPS (frames per second during training) on protoss maps:
- SeqComm (full comm): 5v5: 64.32, 10v10: 32.75, 10v11: 29.82
- SeqComm (local comm): 5v5: 108.38, 10v10: 64.03, 10v11: 59.55
- MAPPO: 5v5: 166–167, 10v10: 146–147, 10v11: 136–137
Note that all results are tested on NVIDIA A100.
Compared with MAPPO, we only need 2x to 3x the compute to achieve better results. | Summary: This paper introduces SeqComm, a novel multi-level communication scheme for multi-agent reinforcement learning. SeqComm enables agents to coordinate asynchronously, with upper-level agents making decisions before lower-level ones. The approach involves two communication phases: negotiation and launching. In the negotiation phase, agents communicate hidden states to determine decision-making priority, while in the launching phase, upper-level agents lead in decision-making and share actions with lower-level agents.
Strengths: 1. SeqComm introduces a novel communication scheme that significantly improves coordination in multi-agent settings.
2. This paper demonstrates theoretical guarantees of monotonic improvement and convergence.
3. This method outperforms existing approaches in various cooperative tasks, showcasing its effectiveness in challenging environments.
Weaknesses: 1. The assumptions regarding local observations and communications might not be realistic for all applications.
Technical Quality: 3
Clarity: 4
Questions for Authors: None
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: refer to the "weaknesses" part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >The assumptions regarding local observations and communications might not be realistic for all applications.
Thank you for pointing out the limitations. We agree that the setting cannot be applied to all applications. However, our setting is more realistic than other full-communication and global-observation settings.
In more detail, local observations are widespread in the real world because of the limitations of distance, hardware, or other objective factors. For example, not all information can be measured and obtained by sensors in the real world, which makes global observation unrealistic.
As for communication, it is widely used in the real world. People use wireless communication to access the Internet. Also, the Internet of Vehicles and the Internet of Things aim to connect everything via 5G communication, which means many works believe communication is necessary despite the difficulty of implementation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal, which eliminates my concerns. I will increase my score to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for your approval, we will revise the text in conjunction with the rebuttal. | Summary: This paper introduces SeqComm, a novel multi-agent reinforcement learning (MARL) method that addresses coordination issues through sequential decision-making and multi-level communication. The main contributions include:
1. A new asynchronous perspective on MARL, allowing agents to make decisions sequentially.
2. A two-phase communication scheme: negotiation phase for determining decision priorities, and launching phase for explicit coordination.
3. Theoretical guarantees of monotonic improvement and convergence for the learned policies.
4. Empirical evaluation on SMACv2 demonstrating superior performance over existing methods.
Strengths: (1) Innovative methodology: The paper proposes a novel approach to MARL by introducing sequential decision-making and multi-level communication, which effectively addresses coordination issues.
(2) Theoretical foundation: The authors provide rigorous theoretical analysis, including proofs of monotonic improvement and convergence for the learned policies.
(3) Comprehensive empirical evaluation: The method is thoroughly evaluated on multiple maps in SMACv2, demonstrating consistent performance improvements over existing baselines.
(4) Ablation studies: The paper includes detailed ablation studies that validate the importance of dynamic decision priority determination.
(5) Practical considerations: The authors provide both a full communication version and a local communication version, addressing potential limitations in real-world applications.
Weaknesses: (1) Computational complexity: The paper lacks a detailed analysis of the computational overhead introduced by the multi-level communication scheme, especially for large-scale multi-agent systems.
(2) Sensitivity analysis: There is no discussion on the sensitivity of the method to hyperparameters, such as the number of sampling trajectories (F) or the length of predicted future trajectories (H).
(3) Potential for deadlocks: The paper lacks a thorough discussion on the possibility of deadlocks in the proposed asynchronous mechanism, which is a critical consideration for any asynchronous system.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Could you elaborate on your choice of SMACv2 as the primary benchmark for evaluating SeqComm? Are there specific characteristics of SMACv2 that make it particularly suitable for demonstrating the advantages of your method? Additionally, do you believe the results from SMACv2 would generalize well to other MARL environments, and if so, why?
(2) Have you considered the possibility of deadlocks in your asynchronous mechanism? What measures, if any, have been implemented to prevent or resolve potential deadlocks?
(3) Can you provide an analysis of the computational complexity of SeqComm compared to existing methods, especially for large-scale multi-agent systems?
(4) How sensitive is SeqComm to the choice of hyperparameters, particularly F and H? Are there any guidelines for selecting these values?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have made an effort to address some limitations of their work, which is commendable. They acknowledge that the assumption of access to other agents' local observations might not be realistic in all applications. This transparency is appreciated and aligns with the NeurIPS guidelines on discussing limitations.
However, there are several areas where the discussion of limitations could be expanded:
Scalability: The authors could further discuss how SeqComm's performance and computational requirements might change as the number of agents increases.
Generalizability: While SMACv2 is a valuable benchmark, the authors could address how well they expect their results to generalize to other MARL environments.
Potential for deadlocks: Given the asynchronous nature of the method, a discussion on the possibility of deadlocks and how they are prevented or resolved would be beneficial.
Hyperparameter sensitivity: The authors could elaborate on how sensitive their method is to the choice of hyperparameters, particularly F and H.
Regarding societal impact, the authors have not explicitly discussed potential negative consequences. While the immediate applications of this work may seem benign, it would be beneficial to consider and discuss potential misuse scenarios or unintended consequences of more efficient multi-agent coordination.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To begin with, we thank the reviewer for the careful review and insightful advice.
>Could you elaborate on your choice of SMACv2 as the primary benchmark for evaluating SeqComm?
In cooperative MARL, SMAC is the most popular testbed for the centralized training and decentralized execution paradigm. Other important testbeds are Google Research Football (GRF) and Multiple Particle Environment (MPE).
Comparing SMAC with the other testbeds (MPE and GRF), SMACv2 (an upgraded version of SMAC) is the latest, proposed in 2023. That work critiques the original benchmarks (SMAC, MPE, and GRF) for lacking stochasticity and meaningful partial observability, and claims they oversimplify coordination. It deliberately increases stochasticity and meaningful partial observability, and conducts thorough experiments to verify this. Our method aims to address the coordination issue in MARL, a problem mostly induced by the partial observability of the agents. Therefore, our method can demonstrate its superiority on benchmarks requiring high-level coordination; otherwise, the performance gap is not obvious on benchmarks where agents do not even need to coordinate to finish the tasks.
Compared with other benchmarks, SMACv2 is more challenging in the following respects: 1. complex observation dimension (hundreds); 2. diversity (many maps and unit types with different functions); 3. stochasticity (different designed start positions); 4. meaningful partial observability (masking much irrelevant information). We believe the results from SMACv2 would generalize well to other MARL environments suffering from coordination issues, since we chose the most challenging benchmark in this field.
>Have you considered the possibility of deadlocks in your asynchronous mechanism? What measures, if any, have been implemented to prevent or resolve potential deadlocks?
A deadlock occurs when two or more agents obtain the same intention value at the same time, so another rule is needed to determine priority. In our implementation, each agent is assigned an index, and the rule to break the deadlock is that the agent with the smaller index decides first.
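A minimal sketch of this index-based tie-breaking rule (`decision_order` is a hypothetical helper of ours, not the actual implementation):

```python
def decision_order(intention_values):
    """Order agents by intention value (higher decides first); ties are
    broken by the smaller agent index, which removes the deadlock."""
    return sorted(range(len(intention_values)),
                  key=lambda i: (-intention_values[i], i))

# Agents 1 and 2 tie at 0.9; the smaller index (1) is placed first.
order = decision_order([0.3, 0.9, 0.9, 0.1])
# → [1, 2, 0, 3]
```

Because the sort key is a total order over (value, index) pairs, every set of intention values yields a unique, deterministic priority.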
> Can you provide an analysis of the computational complexity of SeqComm compared to existing methods, especially for large-scale multi-agent systems?
The computational complexity of SeqComm is mainly related to communication overhead.
For full communication, SeqComm needs more rounds, but it transmits observation information only once. In the remaining n − 1 rounds of communication, with a total of (n − 1)/2 broadcasts per agent, only a single intention value and an action are exchanged. Considering there are n! permutations of decision orders for n agents, our method greatly reduces computational overhead, since each agent needs to perform at most n evaluations to search for a satisfying order.
For local communication, there are only two communication rounds: one for sharing observation information, and another for intention values.
>How sensitive is SeqComm to the choice of hyperparameters, particularly F and H? Are there any guidelines for selecting these values?
H is the length of the predicted trajectory. Empirically, we found that a length of 2-4 leads to decent performance. For longer lengths, the error caused by the world model can be too large. Besides, since the order can change at each step, observations after more steps are meaningless.
F is the number of future trajectories. Theoretically, the larger F is, the more accurate the estimation, but the computational cost also increases. Therefore, in practice, we need to seek a trade-off.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' comprehensive rebuttal and additional results. The response has indeed improved my view of the paper. However, I still have some concerns regarding the Stackelberg Equilibrium (SE) as the foundational assumption for agent coordination in the proposed SeqComm scheme.
Specific Points for Further Consideration:
Rationale for SE in MARL: While the authors have addressed the application of SE in SeqComm, I would appreciate a more rigorous justification for choosing SE over other potential equilibrium concepts. It would be beneficial to explore how SE compares to alternative concepts that may be more naturally aligned with the decentralized and dynamic nature of MARL environments. A comparative analysis highlighting the specific advantages of SE in this context would strengthen the paper's theoretical foundation.
Assumptions of SE: The paper assumes that agents can achieve and maintain an SE, which may not always be realistic in scenarios where agents have limited or imperfect information about others' strategies. I encourage the authors to discuss the robustness of their approach when these assumptions are violated. Specifically: a) How does the SeqComm scheme perform under conditions of information asymmetry?
b) What mechanisms, if any, are in place to ensure the stability of the SE in dynamic MARL environments?
c) How does the approach handle scenarios where perfect information about other agents' strategies is not available?
---
Rebuttal 2:
Comment: This paper aims to demonstrate that SE is better than NE. In essence, under SE, where agents make decisions sequentially, some agents can obtain extra information (the actions of upper-level agents). Intuitively, an agent with more valuable information can make better decisions.
For other potential equilibrium concepts, if the concept allows agents to obtain extra valuable information without breaking the fundamental dynamics of the environment, it can also benefit performance.
Assumption issues.
1. If information asymmetry means communication is not allowed at all, then SeqComm degrades to MAPPO.
If only part of the information is asymmetric, it degrades to the local-communication version.
2. During the execution phase, we recompute the decision-making order based on the current situation at each step, which ensures stability.
3. When perfect information about other agents' strategies is not available, we can use opponent modeling; in more detail, we can train a policy that predicts the actions of other agents. Since we can obtain the observations of others via communication, the opponent modeling can be precise.
---
Rebuttal Comment 2.1:
Comment: Thank you for your comprehensive rebuttal. Your explanations have effectively addressed my concerns and substantially improved my assessment of the paper. As a result, I have decided to increase my score to 7.
This decision is based on several key strengths of your work:
Your SeqComm approach offers an innovative solution to multi-agent coordination through multi-level communication, firmly grounded in Stackelberg Equilibrium theory. The method's robust performance on SMACv2, coupled with a clear analysis of its communication efficiency, demonstrates its practical value in complex MARL scenarios.
I am confident that your contribution has the potential to make a significant impact on the MARL field.
---
Reply to Comment 2.1.1:
Comment: Thank you for your approval, we will revise the text in conjunction with the rebuttal. | Rebuttal 1:
Rebuttal: First of all, we are very grateful to the reviewers for their thorough review of our paper. We highly appreciate your valuable comments. We will emphasize the key points mentioned during the rebuttal period in the revised version.
Additionally, we have provided some extra experiments: Figure 1 includes the attention module vs. the aggregation method; Figure 2 shows some behavioral analysis; Figure 3 shows the comparison with additional baselines.
Pdf: /pdf/2a40c9b67c9354e648fbd2dc66a1f11a1ff39df8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fearless Stochasticity in Expectation Propagation | Accept (spotlight) | Summary: The authors introduce two methods for Expectation Propagation [EP] (which itself can be understood as a method for approximate Bayesian inference) that are robust to the stochastic noise introduced by using approximate expectations in the inner loop of EP. To do so the authors re-interpret the moment-matching update as a particular Natural Gradient Descent (NGD) step. While stochasticity remains due to the expectation, by moving to natural parameters MCMC estimates of the expectation do not suffer from bias-introducing non-linearities. By significantly reducing bias (to zero in the case of EP-$\eta$) the authors show that stable and computationally efficient EP is possible with single-sample moment-matching updates. In experiments the authors show that the method is easier to tune than alternatives and exhibits promising empirical performance.
Strengths: - To my mind the major strength of this submission is that it offers a well-reasoned technical improvement to a class of algorithms (EP) that are arguably underexplored in the literature (probably in large part due to the dominance of variational inference).
- Moreover, empirical results support the effectiveness of the proposed algorithms. Since the authors work hard to reduce dependence on tricky hyperparameter choices, the reader has reason to be confident in the basic validity of the presented empirical results.
- It is conceivable that the (novel, to the best of my knowledge) NGD interpretation offered by the authors could help motivate yet other EP variants.
Weaknesses: To my mind the major weakness of this submission is that the exposition is rather dense and at times hard to follow. This may be somewhat inherent in the technical nature of the topic, but I believe the authors could do a better job of guiding the reader. In particular a lot of space is devoted to somewhat extraneous modeling details ("...between diffuse galactic far ultraviolet radiation and 100-μm infrared emission in various sectors of the observable universe, using data obtained from the Galaxy Evolution Explorer..."), and this space in the main text could be re-purposed to offer more details on the algorithm, discuss the implications of the experimental results in more detail, or give more color on the "unified" EP presentation.
Technical Quality: 4
Clarity: 3
Questions for Authors: - math typos: line 804
- NIW is undefined where first introduced
- i guess in sec. 2.1 you should emphasize more that the domain of $\theta_i$ is $\Omega$, since $\theta_i \notin \Theta_i$ as one might expect? similarly i guess in practice given the form of the outer update $\theta_i = \theta_j$ for all $i, j > 0$? maybe this should be emphasized? can you please clarify?
- i understand that this is a paper about EP, but wouldn't it be valuable to benchmark against variational inference in at least one case? whether the authors like it or not, this remains a relevant comparison in my opinion.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: To my mind the major limitation is the lack of comparison to alternative methods for approximate inference like variational inference.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for providing thoughtful feedback. We agree with the reviewer that the exposition can be dense at times; as the reviewer correctly mentioned, this was to some extent unavoidable due to the nature of the ideas being discussed, but there were several places where we would have liked to expand on points or offer additional explanation and were unable to do so due to space constraints. With the additional page we will look to add additional signposting throughout, including the following specific changes in line with the reviewer's suggestions:
1. We will add sentences to add some intuition and summarise the idea behind EP-$\eta$ on line 158, before _“We call the resulting procedure…”_, and likewise for EP-$\mu$, on line 184, before _“We call this variant…”_.
2. We will expand on the summary of experimental results (lines 251-259) and discuss their implications. We will also add a brief summary of the results for each individual experiment at the end of their descriptions (lines 270, 279, 293 and 304). If space permits we will also look to move some of the hyperparameter sensitivity results from Appendix I into this section.
3. We will add a paragraph, beginning on line 79, to inform the reader that we are about to introduce a unified EP algorithm, and explain that the reason for doing so is primarily to show how several variants, including the new ones to be introduced later, are related to one another. We will also add an explanation of Algorithm 1, before line 86.
In response to the reviewer’s questions:
1. Yes, we will fix this.
2. Agreed, we will define it at first usage.
3. Yes, agreed and we will emphasise both points. It is indeed the case that after an outer update $\theta_i = \theta_j$ for all $i, j$. We could therefore obtain an equivalent procedure using just a single variable $\theta$ in the outer optimisation, but we kept the current presentation to be consistent with prior work. We believe this presentation may be a legacy of earlier works on double-loop EP where additional model structure was assumed, in which cases the distributions involved in the optimisation can be over different subsets of variables at each site.
4. The main focus of our work was to provide more effective methods for optimising the EP variational problem in the presence of Monte Carlo noise, with the case for using EP in such settings having been made by prior work (Xu et al. 2014, Hasenclever et al. 2017, Vehtari et al. 2020). That being said, we agree it would be interesting to compare VI with the different EP variants on various performance metrics as a function of computation cost / time, to gain understanding of the tradeoffs involved. Due to time constraints we have not been able to run these experiments for the rebuttal, but we will do so and add the results to the final paper. More specifically, we will compare with natural gradient VI (Khan and Lin, 2017) as suggested by another reviewer.
As an aside, we note that Bui et al. (2017) studied the effect of the power parameter of power EP ($\beta_i$ in our notation), including the limiting case of VI ($\beta_i \rightarrow \infty$); they found that intermediate values of the power parameter were consistently better across tasks and performance metrics when compared to either the EP or VI extremes. As our methods apply to the general power EP objective, they too can be used to find such intermediate approximations.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their careful response. I maintain my score as is and hope the other reviewers will join me in recommending publication. | Summary: This work addresses the sensitivity of expectation propagation (EP) to the randomness of Monte Carlo (MC) estimates involved in its update steps. It tackles this issue by recasting the moment matching step in EP as natural gradient descent (NGD) in the mean space of an exponential family distribution. The author identifies that the instability of EP to MC noise is due to the nonlinearity of the mirror map defined by the log partition function of the exponential family. By cleverly moving the NGD from the mean space to the natural parameter space, this issue is bypassed.
The author also studies the influence of the stepsize on the accumulation of MC error, finding that decreasing the stepsize of the NGD updates helps to reduce the bias.
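As a toy illustration (not from the paper) of the bias mechanism this summary describes: an unbiased Monte Carlo estimate stays unbiased under an affine map, but pushing it through a nonlinear map, such as the mirror map defined by the log-partition function, introduces bias, which is most severe in the few-sample regime.

```python
import random
import math

random.seed(0)

trials = 200_000
mean_lin, mean_nonlin = 0.0, 0.0
for _ in range(trials):
    x = random.gauss(0.0, 1.0)   # single-sample MC estimate of E[X] = 0
    mean_lin += (2.0 * x + 1.0)  # affine map of the estimate: stays unbiased
    mean_nonlin += math.exp(x)   # nonlinear map: E[exp(x_hat)] = e^{1/2}, not exp(0) = 1
mean_lin /= trials
mean_nonlin /= trials

print(mean_lin)     # close to 2*0 + 1 = 1.0
print(mean_nonlin)  # close to e^0.5 (about 1.65), not 1.0 -> systematic bias
```

This is exactly why moving the update to a space where the estimate enters linearly removes the bias.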
Strengths: - This is a well-written work. It starts with a succinct overview of the expectation propagation (EP) algorithm and summarizes several variants of EP updates in a unified manner (in Algorithm 1). This review also makes the contribution of this work very clear. It provides a sufficient discussion on why EP is sensitive to stochasticity and clearly motivates the design of the methodology. I also appreciate the self-contained review of the exponential family, natural gradient descent, and detailed derivation of EP updates provided in the appendix. Although these materials are standard, they help to make the work more accessible to a general audience.
- The feasibility of performing unbiased NGD updates in the natural space of $\tilde p_i$ (Prop 1) is a smart observation, and this simple modification (from mean space to natural parameter space) makes the inner loop significantly more robust to the stochasticity in Monte Carlo estimation. I'm indeed impressed that EP can work with single-sample estimation in the inner loop.
- I also appreciate the rigorous investigation of the effect of $\alpha$ and $\epsilon$ on the bias of the estimated mean parameter.
- The empirical performance of the proposed methods is superior; they outperform standard EP by a significant margin (as shown in Fig. 2).
Weaknesses: The technical side of this work is very strong in my view, and I don't observe many weaknesses in this regard.
However, I think the novelty of this NGD perspective is overclaimed. To my knowledge, the relationship between moment matching in the exponential family and NGD is very well known (e.g., [1][2][3]). Once it is identified that $\tilde p_i$ is in the exponential family, the derivation of NGD (in either the mean space, as in Prop 1, or the natural parameter space) becomes quite straightforward. I hope the authors can elaborate on this point, and include discussion of some relevant literature around Prop 1.
I believe the real contribution of this work lies in the rigorous understanding of the influence of the stepsize on the accumulation of Monte Carlo (MC) error and the identification of the nonlinearity of the mirror map that leads to this error accumulation, which is already impactful to me.
- [1]: Conjugate-Computation Variational Inference : Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models, 2018
- [2]: MCMC-driven learning, 2024
- [3]: Distributed Bayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server
Technical Quality: 3
Clarity: 3
Questions for Authors: - How do you estimate the KL divergence in the experiments? Maybe I'm missing something, but the target posterior is only known up to a constant, so unbiased KL est is not feasible?
- I'm curious to see the relative performance of this work to NGVI [1] (specified in the Weakness section).
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation is well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for providing thoughtful feedback. With regards to the novelty of our NGD interpretation, we agree that the link between NGD and exponential family moment matching is known, and our intention is not to claim otherwise; we will change the wording around Proposition 1 to make this clearer, with reference to the relevant literature. We do believe however that the connection of this result with the updates of EP is novel; without the identification of an exponential family distribution $\tilde{p}_i$, parameterised by $\lambda_i$ such that the natural parameters are given by the specific affine function $\eta_i^{(t)}(\lambda_i)$, the connection with NGD does not hold. While this is perhaps a straightforward consequence of the link between NGD and moment matching, we do not believe this connection has been made before and it was a necessary insight for the contributions of the later sections. Indeed the only connection we are aware of having been made between the updates of (power) EP and NGD is in the limiting case (as $\beta_i \rightarrow \infty$) for which the updates of power EP and natural gradient variational inference (NGVI) coincide (Bui et al. 2018, Wilkinson et al. 2021).
In response to the questions:
1. The metric displayed in the plots shows the KL divergence from the current approximation ($p$) to (an estimate of) a converged EP solution. The converged solution was found by running EP for a large number of iterations, with the moments estimated using a very large number of samples; we discuss this on lines 247 and 923. We felt this was the most meaningful metric for our experiments, given that we were benchmarking the optimisation performance of different EP variants. Another choice we considered was to show the KL divergence between $p$ and a moment-matched Gaussian, for which the moments are obtained by running long MCMC chains. However, we found this metric to be problematic, as it often does not monotonically decrease during optimisation; for example, converged EP solutions tend to have lower variance than the true posterior (see, for example, Vehtari et al. 2020, Cunningham et al. 2011), and so we often saw a pattern of the KL decreasing as the approximate posterior variance shrinks towards the true posterior variance, and then rising as the approximation variance continues to decrease, before eventually settling at a higher KL than the earlier transient. In contrast, the KL divergence to a converged EP solution was typically monotonic decreasing for stable hyperparameter settings, and had the additional benefit of the attainable lower bound being (approximately) zero.
2. The main focus of our work was to provide more effective methods for optimising the EP variational problem in the presence of Monte Carlo noise, with the case for using EP in such settings having been made by prior work (Xu et al. 2014, Hasenclever et al. 2017, Vehtari et al. 2020). That being said, we agree it would be interesting to compare NGVI with the different EP variants on various performance metrics as a function of computation cost / time, to gain understanding of the tradeoffs involved. Due to time constraints we have not been able to run these experiments for the rebuttal, but we will do so and add the results to the final paper. As an aside, we note that Bui et al. (2017) studied the effect of the power parameter of power EP ($\beta_i$ in our notation), including the limiting case of VI ($\beta_i \rightarrow \infty$); they found that intermediate values of the power parameter were consistently better across tasks and performance metrics when compared to either the EP or VI extremes. As our methods apply to the general power EP objective, they too can be used to find such intermediate approximations.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response to address my questions. I'm happy to keep my score. | Summary: This paper considers new inference algorithms for EP. By framing the moment-matching equations of EP as a natural gradient update of a variational objective, they propose two new algorithms that are better suited for reducing/removing the bias introduced when sampling is required.
Strengths: I like the attempt at generalising and encapsulating the EP literature, and I find the `trick’ of mitigating the sampling inducing bias through a change of parameterisation is clever.
Weaknesses: 1) In general I find the paper quite hard to follow and of course this is not helped by the form of the standard EP equations. For example the main point of the paper is to handle the bias introduced by using sampling to estimate certain quantities within the EP equations. However the source of this sampling is only explicitly mentioned on line 132 in text. It would be much clearer if this was explicitly in Alg 1, and would also make it more convincing that Alg. 1 actually encapsulated many EP style algorithms.
2) Following above some terms are not defined until later on in the paper. For example the convex conjugate in eqn 4, and on line 100 only defined one page later on line 123.
3) The paper is very descriptive but does not explain results. For example, the experiments only describe the set up, but all results and any conclusions are pushed into the appendix. This makes it hard to actually assess the contributions.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) What is c_samp?
2) In Eqn 24 should there be a $\beta_i$ scaling term?
3) In fig 1 a) why do most of the curves have a `u’ shape? Additionally why do the dashed orange/green curves not?
4) Does sequential update (instead of in parallel) affect the bias?
5) Why do you fix alpha to 1 and introduce a new step size eps? This looks like the only difference between Eqn 10 and the natural gradient part of 11.
6) The link between natural gradient steps and EP updates has been established previously (in Heskes and also in Hasenclever). Why is the view taken in paper a novel perspective compared to these previous works?
7) How does this work relate to `Bayes-Newton Methods for Approximate Bayesian Inference with PSD Guarantees’ Wilkinson et al, 2021 ? Which consider EP style algorithms with a natural gradient framework.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for providing thoughtful feedback. The reviewer’s concerns relate to the paper presentation. We will take specific steps to address these, detailed below. In light of these changes, would the reviewer be willing to reconsider their position on the paper?
1. We agree the presentation of EP updates is somewhat atypical. We felt it necessary to provide them in the context of the saddle-point problem, as this perspective is crucial for the later sections. Space constraints meant that providing both this and the conventional view in the main paper was difficult. We think this would be best addressed by referring to a short appendix after acknowledging the atypical presentation on line 69. The appendix would provide the conventional view – motivated as KL-divergence-minimising projections into the approximating family – and show that the resulting updates are equivalent to ours. We will also introduce the sampling source earlier by adding the following at the end of line 72: _“The expectation in (4) is most often computed analytically, or estimated using deterministic numerical methods. In principle it can also be estimated by sampling from $p_i$, however, we will later show that the resulting stochasticity can introduce bias into the procedure.”_. Note that this is also discussed starting on line 91. Regarding Algorithm 1, we will highlight the expectation in the update and add the comment _“stochastic estimation of this expectation can lead to biased updates”_.
2. $A^*(.)$ is introduced on line 38, but we will add the following text after that definition: _“$A(.)$ and $A^*(.)$ are convex conjugates of one another.”_. We will also add reminders throughout to help the reader.
3. As the results in Figure 2 were similar we summarised them jointly at the beginning of Section 4 (lines 251-259) before giving an overview of the individual experiments. We will expand on this and give a brief summary of the results for each experiment at the end of their descriptions. If space permits we will also move some of the hyperparameter sensitivity results from Appendix I into this section.
In response to the questions:
1. $c_\text{samp}$ is the cost of drawing a sample from one of the tilted distributions. It is defined at the beginning of Appendix G (line 884) and not used outside of that section.
2. Yes, we will fix this.
3. If the step size is too large the expected progress can decrease. This can simply be due to overshooting the minimum along the step direction. Alternatively, as a larger step size also results in higher variance, it can lead to some steps that are far worse than the expected step location, or even outside of the valid domain. If any steps in our sample went out of the valid domain we could not compute the average, and so the line can end abruptly (above a certain step size).
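A minimal numeric sketch (hypothetical numbers, not taken from the paper) of the "outside the valid domain" failure mode: for a Gaussian site, the precision-like natural parameter must stay positive, and a large step size applied to noisy update directions can push some candidate steps out of that domain, making the average over steps undefined.

```python
# Hypothetical precision parameter (must stay > 0) and noisy update directions.
lam = 1.0
noisy_grads = [-0.8, 0.5, -0.9]

def steps(lam, grads, eps):
    """One candidate update per noisy gradient, with step size eps."""
    return [lam + eps * g for g in grads]

small = steps(lam, noisy_grads, eps=0.1)  # all candidates stay in the valid domain
large = steps(lam, noisy_grads, eps=2.0)  # some land at lam <= 0: invalid Gaussian

print(all(v > 0 for v in small))  # True
print(all(v > 0 for v in large))  # False -> the average over steps breaks down
```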
4. Bias results from other site parameters changing between updates, which happens in both sequential and parallel settings. We will explain this on line 140 as other readers are likely to have the same question.
5. $\alpha$ plays a similar role to a NGD step size in (10), but it also affects the distribution $\tilde{p}_i$ and hence the map from $\lambda_i$ to $\eta_i$. Note that (10) is an update direction in $\eta_i$ whereas (11) is an update for $\lambda_i$. If we did not fix $\alpha=1$, the resulting coefficient in (11) would be $\alpha^2$. To obtain a more faithful NGD interpretation we fix the distribution and vary the step size, which requires introducing $\epsilon$. We will elaborate on this on line 155.
6. We do not believe any prior work has made a direct connection between the updates of EP and NGD, except for the limiting case in which power EP coincides with VI (see answer to 7 below). Hasenclever et al. (2017) derived _new_ updates (different from the standard ones) which performed NGD of the variational objective with respect to a different distribution and parameterisation; in contrast, we show that the standard EP updates can already be viewed as performing NGD. We discuss this from line 210, but will amend the wording to make the distinction clearer. We believe the reviewer may be referring to the extended version of Heskes and Zoeter (2002) in which the authors propose two methods for finding saddle-points of the objective. One performs joint gradient ascent/descent in the parameters, and the other is what could be called “standard” double-loop EP. With respect to the first, the authors say it _“can be interpreted as a kind of natural gradient descent in γ and ascent in δ”_. However, the method referred to simply follows the standard gradient. We asked one of the authors about the meaning behind the statement through private correspondence, to which they replied _“I wouldn't dare to claim that there is a direct connection to Amari's natural gradient, perhaps more to his idea of information geometry and em algorithms.”_. In any case, the statement is about the gradient-based method and not the “standard” updates. We are not aware of any other work by Heskes suggesting a connection between EP and NGD.
7. Wilkinson et al. (2021) present a unifying framework encompassing several algorithms, viewing them as performing online Newton optimisations with localised updates. This framework elegantly illustrates connections between EP, VI and posterior linearisation. Their view is complementary to ours, and does not make any claims about a connection between EP and NGD in the general case. The authors do however show that the updates of power EP coincide with those of natural gradient VI when the variational limit of the power parameter is taken ($\beta_i \rightarrow \infty$, in our notation); this point was also made by Bui et al. (2018). Our connection is more general, and applies for any value of $\beta_i$. This result is clearly relevant however, and we will reference it after Proposition 1 on line 127.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I think further clarifying and being more specific about the distinction with SNEP (Hasenclever et al. [13]) would be beneficial to the paper. For example, on line 219 you state that SNEP can be viewed as doing natural gradient descent 'but with distributions that are more closely matched with those being optimised'; however, it is not clear to me what 'distributions that are more closely matched with those being optimised' means.
In light of your response and the other reviews, I will increase my score by one point. | null | null | Rebuttal 1:
Rebuttal: We would like to thank all of the reviewers for taking the time to review our paper. All reviewers offered valuable feedback, which we have taken on board. Two reviewers highlighted weaknesses related to the paper presentation; we have taken specific steps to address the points raised, which are detailed in the rebuttals below. One reviewer questioned the novelty of our NGD interpretation of EP; we believe we have addressed this below, and will add wording to the paper to make the extent of our contribution clearer. Two reviewers also asked for variational inference to be included as an additional baseline, which we have committed to doing for the camera-ready paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Cluster-wise Graph Transformer with Dual-granularity Kernelized Attention | Accept (spotlight) | Summary: This paper proposed a novel attention method for graphs, focusing on different resolutions of nodes. Instead of attention calculation on the coarsened graph, it views the clustered nodes as sets, and calculates the attention between cluster and node level. Furthermore, it leverages the kernel method for more efficient attention calculation.
Strengths: - Overall, the writing is good and clear.
- The methodology is clear, and the illustration is great.
- The experiment results seem strong.
Weaknesses: - The only novelty is the dual granularity/resolution/hierarchy attention. The kernelization and the multi-kernel are already investigated.
- The graph datasets in the experiments are small. Since the attention can be kernelized and is more efficient, there is no reason not to aim for larger graphs.
Technical Quality: 3
Clarity: 4
Questions for Authors: - How do you pick the number of clusters for metis algorithm?
- What properties does the assignment matrix C hold? For example, do rows sum up to 1?
- Intuitively I don't understand why the number of node queries is the same as the number of clusters, not number of nodes. In the illustration, why 3 qs not 9?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: As mentioned by the authors, the method relies on metis algorithm, which is not flexible enough. Ideally it should be compatible with any graph partitioning algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable questions. We provide the following detailed responses to your major concerns.
> **Q1. The only novelty is the dual granularity/resolution/hierarchy attention. The kernelization and the multi-kernel are already investigated.**
We acknowledge the reviewer's comment regarding the familiarity of kernelization and multi-kernel methods. However, we would like to further elucidate the novelty of our approach:
1. **Motivation**: In the context where most node clustering-based methods utilize a graph coarsening pipeline—which has its limitations (as illustrated in Fig. 1)—we propose a novel approach for propagating messages between clusters without compressing them. This method allows for deeper interactions between cluster-level and node-level information (as shown in Sec. 3.3).
2. **Application of Multi-Kernel Learning (MKL)**: While multi-kernel learning is a mature mathematical tool, we are the first to apply the MKL approach to integrate node-level and cluster-level information.
3. **Implementation**: While kernelization is commonly used to accelerate attention computations, our approach integrates kernelization with a message-passing framework, accelerating the attention computation by propagating the keys and values among the clusters (as illustrated in Fig. 2).
In summary, while each mathematical tool has been extensively studied before, our application of these mathematical tools is novel and addresses a previously suboptimal aspect of the graph pooling process.
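As background for the kernelization discussed above, here is the standard linear-attention identity that such accelerations rely on, sketched in generic form (an illustrative feature map, not the paper's exact N2C-Attn formulation): once softmax is replaced by a positive feature map, attention can be computed as $\phi(Q)(\phi(K)^\top V)$ instead of $(\phi(Q)\phi(K)^\top)V$, avoiding materializing the full attention matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 9, 3, 4  # e.g. 9 node keys/values attended to by 3 cluster queries

phi = np.exp  # a simple positive feature map, purely illustrative

Q = phi(rng.normal(size=(m, d)))
K = phi(rng.normal(size=(n, d)))
V = rng.normal(size=(n, d))

quadratic = (Q @ K.T) @ V  # materializes the m x n attention matrix
linear = Q @ (K.T @ V)     # associativity: same result, no m x n matrix

print(np.allclose(quadratic, linear))  # True
```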
> **Q2. The graph datasets in the experiments are small. Since the attention can be kernelized and is more efficient, there is no reason not to aim for larger graphs.**
In this paper, our focus is on optimizing clustering-based graph pooling for graph-level tasks. Unlike node-level tasks, graph-level tasks naturally involve graphs of relatively smaller sizes. However, in terms of the number of graphs, the datasets we selected are not "small-scale." (For instance, the OGB MolHIV dataset we used contains 41,127 graphs, which is a considerable size.)
We chose these datasets because they align with the datasets used in the two most relevant baseline works [1,2] that we compare against, ensuring a fair and consistent evaluation of our method.
> **Q3. How do you pick the number of clusters for metis algorithm?**
In our experiments, we set the number of clusters for METIS as a hyperparameter chosen from $\\{4, 8, 16, 32\\}$. While this approach of pre-selecting the number of clusters may seem inflexible (as noted in the limitation section), many works in graph pooling (e.g. [1,2,3]) also preset the cluster numbers.
Meanwhile, there are methodologies capable of dynamically adjusting the number of clusters through learning (e.g. [8]). As mentioned in the limitation section, integrating these methods with our N2C-Attn model represents a promising direction for future work.
> **Q4. What properties does the assignment matrix C hold? For example, do rows sum up to 1?**
We clarify that the concept of cluster assignment matrix $\boldsymbol{C}$ is not originally defined in this paper but is instead adopted from reference [4]. It is common for the rows of $\boldsymbol{C}$ to sum up to 1, which can be ensured through a row-wise softmax operation, e.g. [3, 5, 6]. However, configuration where the rows of $\boldsymbol{C}$ do not sum to 1 is also possible, e.g. [7].
Regarding the properties of $\boldsymbol{C}$, there are two main points: Firstly, its shape corresponds to [#Num of nodes, #Num of clusters]. Secondly, the element $\boldsymbol{C}_{ij}$ represents the weight of node $i$ in cluster $j$. These characteristics allow the computation involving $\boldsymbol{C}$, the feature matrix $\boldsymbol{X}$, and the adjacency matrix $\boldsymbol{A}$ to simulate the process of graph coarsening (i.e. $\boldsymbol{X}^{P} = \boldsymbol{C}^T \boldsymbol{X}; \quad \boldsymbol{A}^{P} = \boldsymbol{C}^T \boldsymbol{A} \boldsymbol{C}$).
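A small sketch (illustrative shapes and data, not from the paper) of this coarsening computation, with a row-stochastic assignment matrix obtained by a row-wise softmax:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 9, 3, 5                  # 9 nodes, 3 clusters, 5 features

logits = rng.normal(size=(n, k))
C = np.exp(logits)
C /= C.sum(axis=1, keepdims=True)  # row-wise softmax: each row sums to 1

X = rng.normal(size=(n, d))        # node features
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # symmetric adjacency, no self-loops

Xp = C.T @ X                       # coarsened features: (k, d)
Ap = C.T @ A @ C                   # coarsened adjacency: (k, k)

print(Xp.shape, Ap.shape)          # (3, 5) (3, 3)
```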
> **Q5. Intuitively I don't understand why the number of node queries is the same as the number of clusters, not number of nodes. In the illustration, why 3 qs not 9?**
We understand the reviewer's concern and offer further clarification here. In the N2C-Attn framework, each node is responsible for providing a pair of keys, and similarly, each cluster supplies a pair of queries (line 121, 128). Thus, a natural setup is to have **the number of keys align with the number of nodes** and **the number of queries align with the number of clusters**.
However, N2C-Attn considers both cluster-level and node-level information. For nodes within the same cluster, their cluster-level information should be identical (line 119). Therefore, the number of cluster-level keys should be equal to the number of clusters. Thus we have:
| Type | # Num |
|-|-|
| node-level key| Number of nodes
| cluster-level key| Number of clusters
| node-level query | Number of clusters
| cluster-level query | Number of clusters
It's important to note that **node-level** or **cluster-level** specifically indicate the belonging to different feature spaces, $\mathcal{X_N}$ and $\mathcal{X_C}$, respectively. **Node-level queries** are provided by each **cluster**, not by individual nodes. We hope this clears up any confusion for the reviewer.
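An illustrative shape check (hypothetical dimensions matching a 3-cluster/9-node example, not the paper's implementation) of the counts in the table: keys exist per node at the node level and per cluster at the cluster level, all queries are supplied per cluster, and the two scores can be combined multiplicatively in the style of a product kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 9, 3, 4                 # 9 nodes grouped into 3 clusters

K_node = rng.normal(size=(n, d))  # node-level keys: one per node
K_clus = rng.normal(size=(k, d))  # cluster-level keys: one per cluster
Q_node = rng.normal(size=(k, d))  # node-level queries: one per CLUSTER
Q_clus = rng.normal(size=(k, d))  # cluster-level queries: one per cluster

assign = np.repeat(np.arange(k), n // k)     # node i belongs to cluster assign[i]

# Product-kernel-style combination of node-level and cluster-level scores.
score_node = Q_node @ K_node.T               # (k, n)
score_clus = (Q_clus @ K_clus.T)[:, assign]  # (k, n): same cluster-level key per member
score = score_node * score_clus

print(score.shape)  # (3, 9): every cluster attends to every node
```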
[1]. He et al. A Generalization of ViT/MLP-Mixer to Graphs. ICML 2023.
[2]. Wu et al. Structural entropy guided graph hierarchical pooling. ICML 2022.
[3]. Bianchi et al. Spectral clustering with graph neural networks for graph pooling. ICML 2020.
[4]. Liu et al. Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities. IJCAI 2023.
[5]. Ying et al. Hierarchical Graph Representation Learning with Differentiable Pooling. NeurIPS 2018.
[6]. Khasahmadi et al. Memory-based graph networks. ICLR 2020.
[7]. Bacciu et al. A Non-Negative Factorization approach to node pooling in Graph Convolutional Neural Networks. AIIA 2019.
[8]. Song et al. Graph Parsing Networks. ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your effort in the rebuttal. After reading your response to the reviewers, I would like to raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Many thanks
Comment: Many thanks for the positive feedback. We are dedicated to continual improvement. We also wish to thank the reviewer for improving the quality of our paper. | Summary: This paper considers graph transformers. In previous methods that consider cluster info, node clusters are pooled, which may lose node level information. This paper proposes node-to-cluster attention, where the nodes in the clusters are not compressed, and each cluster can interact with every node in other clusters. It proposed efficient formulation of node-to-cluster attention, and incorporated into the attention mechanism of a graph transformer. A comparison of the proposed method with existing benchmarks, both GCN and transformers, were conducted over 8 datasets, and the proposed method is shown to outperform.
Strengths: - Even though it is a complicated setup, the paper is well written and illustrated, and somewhat managed to get the setup across.
- The experiments are comprehensive, with both GCN and graph transformer benchmarks, a study of the necessity of combining cluster- and node-level info, and an efficiency study. The performance seems good.
Weaknesses: - Even though it may be unavoidable, the amount of notation makes reading the paper somewhat difficult
- Instead of using metis which is a partitioning algorithm, why not use a graph clustering algorithm? Is it because you like each partition to have an equal number of nodes?
Technical Quality: 3
Clarity: 3
Questions for Authors: - In addition to the node-to-cluster attention, should there also be cluster-to-node attention and cluster to cluster attention?
- Figure 3: typo? “positinal”->”positional”
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: discussion in Appendix H.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's comments and suggestions. Below, we provide detailed responses to address the main concerns.
> **Q1. Even though it may be unavoidable, the amount of notations make reading the paper somewhat difficult.**
We fully understand the reviewer's concern about the amount of notation, and we acknowledge that it might make the reading experience challenging. However, given the complexity of the algorithm's setup, these notations are essential for conveying the technical details accurately.
The idea of our approach, however, is straightforward: we integrate multiple kernel learning within the kernelized attention framework, which facilitates effective information transfer at both cluster and node levels among clusters without resorting to the graph coarsening pipeline.
To enhance readability, we have included visual representations of the methodology in Figures 1, 2, and 3 in the paper. Additionally, we provide here a list of notation and the corresponding description.
| Notation | Description |
|-|-|
| $(\mathcal{N}, \mathcal{E}, \mathbf{X}, \mathbf{A})$ | Multi-tuple representing the graph: nodes ($\mathcal{N}$), edges ($\mathcal{E}$), node features ($\mathbf{X}$), adjacency matrix ($\mathbf{A}$). |
| $(\mathcal{N}^P, \mathcal{E}^P, \mathbf{X}^P, \mathbf{A}^P)$ | Cluster-level (coarsened) graph components. |
| $\mathbf{C}$ | Cluster Assignment Matrix, used to map nodes to clusters, obtained from graph partitioning methods. |
| $\mathcal{X}_C, \mathcal{X}_N$| Feature space for cluster-level and node-level attributes. |
|$k_t$| Node-level key, derived from the $t$-th node's embedding: $k\_t = \mathbf{W}\_k h\_t$, where $h_t$ is the feature of the $t$-th node. |
| $K_j$| Cluster-level key, representing collective features of the $j$-th cluster: $K\_j = \mathbf{W}'\_k (\sum\_{s}\mathbf{C}\_{sj}h\_{s})$. |
|$q_i$| Node-level query for the $i$-th cluster interacting with the node-level key: $q\_i = \mathbf{W}\_q (\sum\_{s} \mathbf{C}\_{si} h\_{s})$. |
|$Q_i$| Cluster-level query for the $i$-th cluster interacting with the cluster-level key: $Q\_i = \mathbf{W}'\_q (\sum\_{s} \mathbf{C}\_{si} h\_{s})$. |
|$\kappa_B$| Bi-level kernel used in both N2C-Attn-T and N2C-Attn-L. Combines node and cluster level kernels. |
|$\kappa_C$| Cluster-level kernel function comparing cluster-level queries and keys. |
|$\kappa_N$| Node-level kernel function comparing node-level queries and keys. |
|$\alpha, \beta$| Learnable parameters in N2C-Attn-L, weighting the influence of cluster-level and node-level kernels, respectively. |
|$\Phi_\mathrm{B}$| Feature map corresponding to the bi-level kernel $\kappa_B$|
|$\phi, \psi$| Feature maps corresponding to kernel functions for cluster-level and node-level, respectively. |
|$v_t$| Node-level values used in the attention computation. |
|$\langle \cdot, \cdot \rangle$| Inner product|
|$\otimes, \oplus$| Operators for the outer product and concatenation|
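To make these constructions concrete, here is a minimal NumPy sketch (toy dimensions and random weights chosen purely for illustration, not the paper's implementation) of how $k_t$, $K_j$, $q_i$, and $Q_i$ follow from the node features and the assignment matrix $\mathbf{C}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 6, 2, 4                        # nodes, clusters, feature dim
H = rng.normal(size=(n, d))              # node embeddings h_t (one row per node)
C = np.zeros((n, p))                     # hard cluster assignment matrix
C[:3, 0] = 1.0
C[3:, 1] = 1.0

# Separate projections for node-level (W) and cluster-level (W') terms
W_k, W_q = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wp_k, Wp_q = rng.normal(size=(d, d)), rng.normal(size=(d, d))

k = H @ W_k.T                 # node-level keys k_t = W_k h_t
cluster_sum = C.T @ H         # sum_s C_si h_s, one row per cluster
K = cluster_sum @ Wp_k.T      # cluster-level keys K_j
q = cluster_sum @ W_q.T       # node-level queries q_i (one per cluster)
Q = cluster_sum @ Wp_q.T      # cluster-level queries Q_i
```

Note that, consistent with the clarification above, the node-level queries $q_i$ are produced per cluster, not per node.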
> **Q2. Instead of using metis which is a partitioning algorithm, why not use a graph clustering algorithm? Is it because you like each partition to have an equal number of nodes?**
Our choice of Metis over other graph clustering algorithms was not due to a specific requirement (e.g., that each partition has an equal number of nodes); indeed, any method that generates a cluster assignment matrix could substitute for Metis. We acknowledge this interchangeability in line 229 of our paper and in the limitation section.
The main reasons for using Metis in our work are:
1. We selected Graph-ViT [1] as a primary baseline, which employs Metis for subgraph partitioning. To ensure a fair comparison, we followed this setup.
2. Metis is a well-established graph partitioning algorithm [2], recognized for its efficient implementations and applications in graph learning (e.g. [3]).
3. Unlike other works that focus on optimizing partitions, our study concentrates on the post-partition phase. Therefore, we preferred a straightforward and common partitioning approach to highlight our advancements in post-partition optimization.
> **Q3. In addition to the node-to-cluster attention, should there also be cluster-to-node attention and cluster to cluster attention?**
This is an intriguing question. We address it in two parts:
- **Cluster-to-cluster attention** already exists. In our paper, we refer to techniques that consider only the aggregate information of clusters as cluster-to-cluster attention, such as GraphViT. A more detailed explanation is provided in Appendix B.
- Regarding **cluster-to-node attention**, this would involve each node aggregating information at the cluster level during its representation update. The viability of this approach depends on whether the information from the cluster is useful for node-level tasks. Given that node-level tasks often require finer-grained information than what clusters provide, this assumption may not always hold, warranting further experiments and analysis.
> **Q4. Figure 3: typo? “positinal”->”positional”**
Thank you for pointing out this typo. We will make the necessary corrections.
[1]. He et al. A Generalization of ViT/MLP-Mixer to Graphs. ICML 2023.
[2]. Karypis et al. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM 1998.
[3]. Chiang et al. Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. KDD 2019.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the response and thank the authors for the careful reply.
---
Reply to Comment 1.1.1:
Title: Many thanks
Comment: Thank you for your feedback and assistance in improving the quality of our paper. We are dedicated to refining our work further and implementing necessary improvements. | Summary: The paper introduces the Node-to-Cluster Attention (N2C-Attn) mechanism, which captures both node and cluster-level information using multiple kernels. N2C-Attn can be implemented in the form of linear-time complexity by a cluster-wise message-passing framework. Based on N2C-Attn, the authors propose a Cluster-wise Graph Transformer (Cluster-GT), and demonstrate it outperforms baselines in graph-level benchmarks.
Strengths: This paper is easy to read, with a well-motivated and appropriately designed method to address the identified problem. Using kernelized attention effectively resolves the complexity issue associated with bi-level attention. The theoretical claim aligns well with the model's motivation and behavior. Extensive and diverse experiments demonstrate the model's superior performance and efficiency.
Weaknesses: - The paper emphasizes "without resorting to the graph coarsening pipeline," suggesting that the proposed method eliminates graph coarsening in their model pipeline. However, it actually employs a graph coarsening method (i.e., METIS). While it is true that the proposed method maintains clusters uncompressed, unlike traditional methods that typically coarsen each cluster into a single embedding, this statement might be misleading to readers. The authors should revise this expression to prevent any potential misunderstanding and avoid overselling the method.
- I suggest authors discuss the existing research on GNNs with graph coarsening to capture broader structure information (e.g., higher-order structures or long-range dependencies). Representative models are listed below:
1. Fey, M., Yuen, J. G., & Weichert, F. (2020). Hierarchical inter-message passing for learning on molecular graphs. arXiv preprint arXiv:2006.12179.
1. Zhang, Z., Liu, Q., Hu, Q., & Lee, C. K. (2022). Hierarchical graph transformer with adaptive node sampling. Advances in Neural Information Processing Systems, 35, 21171-21183.
1. Liu, C., Zhan, Y., Ma, X., Ding, L., Tao, D., Wu, J., & Hu, W. (2023). Gapformer: Graph Transformer with Graph Pooling for Node Classification. In IJCAI (pp. 2196-2205).
1. Fu, D., Hua, Z., Xie, Y., Fang, J., Zhang, S., Sancak, K., ... & Long, B. (2024) VCR-Graphormer: A Mini-batch Graph Transformer via Virtual Connections. In The Twelfth International Conference on Learning Representations.
1. Kim, D., & Oh, A. (2024) Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning. In Forty-first International Conference on Machine Learning.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Can the authors rename RWSE to RWPE, following the original work?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's comments and suggestions. Here are our detailed responses to your main concerns.
> **Q1. The paper emphasizes "without resorting to the graph coarsening pipeline," suggesting that the proposed method eliminates graph coarsening in their model pipeline. However, it actually employs a graph coarsening method (i.e., METIS). While it is true that the proposed method maintains clusters uncompressed, unlike traditional methods that typically coarsen each cluster into a single embedding, this statement might be misleading to readers. The authors should revise this expression to prevent any potential misunderstanding and avoid overselling the method.**
We thank the reviewer for pointing out this issue. Indeed, the METIS algorithm includes the coarsening (and uncoarsening) process. Thus, strictly speaking, there is a coarsening operation during the partition phase in our implemented cluster-GT.
The point we wish to emphasize is that our work primarily focuses on the post-partition phase (line 227), while the methods used in the partition phase are not our central focus. We can replace Metis with other methods that do not involve coarsening. The proposed approach itself (i.e. N2C-Attn) achieves the objective of "without resorting to the graph coarsening pipeline".
We thank the reviewer for this suggestion. We will revise our manuscript to clarify this point and avoid any potential overstatement.
> **Q2. I suggest authors discuss the existing research on GNNs with graph coarsening to capture broader structure information (e.g., higher-order structures or long-range dependencies). Representative models are listed below:**
>
>[5]. Fey et al. Hierarchical inter-message passing for learning on molecular graphs. arXiv:2006.12179.
>
>[6]. Zhang et al. Hierarchical graph transformer with adaptive node sampling. NeurIPS 2022.
>
>[7]. Liu et al. Gapformer: Graph Transformer with Graph Pooling for Node Classification. IJCAI 2023.
>
>[8]. Fu et al. VCR-Graphormer: A Mini-batch Graph Transformer via Virtual Connections. ICLR 2024.
>
>[9]. Kim et al. Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning. ICML 2024.
Thank you for your suggestion and the provided references. Here, we briefly discuss the literature you mentioned and will include a more comprehensive discussion on related work in our paper.
[5] utilizes a dual-graph structure, employing a hierarchical message passing strategy between a molecular graph and its junction tree to facilitate a bidirectional flow of information. This concept of interaction between the coarsened graph (clusters) and the original graph (nodes) is similar to our N2C-Attn. However, the difference lies in [5]'s approach to propagating messages between clusters and nodes, whereas N2C-Attn integrates cluster and node information directly in the attention calculation using a multiple-kernel method.
[6] introduces a novel node sampling strategy as an adversarial bandit problem and implements a hierarchical attention mechanism with graph coarsening to address long-range dependencies efficiently. [7] uses graph pooling to coarsen nodes into fewer representatives, focusing attention on these pooled nodes to manage scalability and computational efficiency. Nonetheless, [6,7] still follow a graph coarsening pipeline, i.e., computing attention on the pooled graph.
[8] focuses on challenges in mini-batch training, proposing to rewire graphs by introducing multiple types of virtual connections through structure- and content-based super nodes. This approach differs from our study, which deals with information propagation between different clusters in the whole graph. [9] introduces the Subgraph-To-Node (S2N) translation method, coarsening subgraphs into nodes to improve subgraph representation learning. While innovative for subgraph classification, it follows the graph coarsening pipeline and does not align directly with the broader graph-level tasks targeted in our research.
> **Q3. Can the authors rename RWSE to RWPE, following the original work?**
We appreciate the reviewer's attention to this interesting detail. Here, we provide a brief investigation into the naming of RWPE.
In the original paper that introduced RWPE [1], the term "Random Walk Positional Encoding (RWPE)" was proposed. This paper utilized the self-landing probability of nodes in a random walk to capture neighborhood structural information.
Subsequently, an influential work in the graph transformer domain [2] made a clear distinction between two types of encodings for structure and position, naming them Positional Encoding (PE) and Structural Encoding (SE). Positional encodings are intended to provide an understanding of a node's position within the graph, while Structural encodings aim to embed the structure of graphs or subgraphs, enhancing the expressivity and generalizability of GNNs.
Interestingly, [2] argues that the Random Walk Positional Encoding (RWPE) proposed in [1] actually serves as a Structural Encoding (SE). Based on our investigation, it is likely that **[2] began using the term RWSE instead of RWPE**. Many subsequent studies (likely influenced by [2]), such as [3, 4], have also adopted RWSE over RWPE. In our work, we also use RWSE, the widely accepted term.
In conclusion, both RWSE and RWPE are widely recognized and used interchangeably in the academic community to refer to the same encoding method (the diagonal of the $m$-step random-walk matrix). We will include this brief investigation on the evolution of the term RWPE (RWSE) in our paper.
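For reference, this encoding is straightforward to compute. Below is a small NumPy sketch of the diagonal-of-random-walk-powers construction (our own illustration, not code from any of the cited works):

```python
import numpy as np

def rwse(A, steps=4):
    """RWSE/RWPE: diagonals of powers of the random-walk matrix D^{-1} A.

    Returns an (n, steps) array whose t-th row collects the self-landing
    probabilities of node t after 1..steps random-walk steps."""
    deg = A.sum(axis=1)
    RW = A / np.maximum(deg, 1)[:, None]   # D^{-1} A, row-stochastic
    enc, P = [], np.eye(len(A))
    for _ in range(steps):
        P = P @ RW                         # m-step random-walk matrix
        enc.append(np.diag(P))             # self-landing probabilities
    return np.stack(enc, axis=1)
```

On a triangle graph, for instance, every node has self-landing probability 0 after one step and 1/2 after two steps.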
[1]. Dwivedi et al. Graph Neural Networks with Learnable Structural and Positional Representations. ICLR 2022.
[2]. Rampášek et al. Recipe for a General, Powerful, Scalable Graph Transformer. NeurIPS 2022.
[3]. Shirzad et al. Exphormer: Sparse Transformers for Graphs. ICML 2023.
[4]. He et al. A Generalization of ViT/MLP-Mixer to Graphs. ICML 2023.
---
Rebuttal 2:
Comment: Thank you for your detailed comments. I will support this paper's acceptance (changed score from 6 to 7).
---
Rebuttal Comment 2.1:
Title: Many thanks
Comment: Many thanks for your positive feedback. We are committed to further refining our work and making the necessary improvements to address any concerns. Thank you for the opportunity to enhance the quality of our research. | Summary: The paper proposes an attention-based methodology for supervised graph classification and regression. It adopts a pipeline similar to GraphViT, involving graph partition, cluster-wise representation learning, and aggregation. However, the core mechanism for learning cluster-wise representations is novel. Specifically, it introduces an inter-cluster attention and a cluster-to-node attention, using both attention maps to formulate the attention weights that aggregate node features into the query cluster's representation. This method is evaluated on eight graph classification/regression benchmarks, demonstrating promising results compared to the listed baselines.
Strengths: 1. The paper presents a clear and well-supported motivation for the proposed methodology, with technical details that are thoroughly demonstrated.
2. The methodology is novel, and considering both the features of each node and the cluster it is affiliated with when calculating the attention weight is highly reasonable.
3. Experimental results demonstrate the effectiveness of the method. Additionally, the exploratory study on combination weights validates the significance of introducing the learnable bias of the affiliated cluster into the attention weights.
Weaknesses: 1. The comparison with graph pooling methods misses MVPooL [1], a recent baseline that has achieved higher accuracy on these benchmarks.
2. Typo: the expression of the attention score between the $i$-th cluster and the $t$-th node in the $j$-th cluster should not contain the value $v_t$.
3. Suggestion: in Figure 2, combining Step 1 with Step 4 might provide a clearer illustration. Only with the presence of queries, keys, and values can the result of an attention operation be in the form of an aggregated representation; the result involving only keys and values is not well-defined. Additionally, combining Step 1 with Step 4 would better match the computation order of Equation 11.
**Reference**
[1] Zhang, Zhen, et al. "Hierarchical multi-view graph pooling with structure learning." IEEE Transactions on Knowledge and Data Engineering 35.1 (2021): 545-559.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the authors provide a heuristic explanation for why the cluster-level attention outperforms node-level attention on many benchmarks (as shown in Figure 5)? Cluster-level attention assigns the same weight to all nodes within a cluster, while node-level attention allows more flexible integration between cluster and node representations. It is not immediately clear why a more rigid approach would outperform a more flexible one.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's feedback. We provide details to clarify the reviewer's major concerns.
> **Q1. The comparison with graph pooling methods misses MVPooL [1], a recent baseline that has achieved higher accuracy on these benchmarks.**
We appreciate the reminder from the reviewer. MVPool [1] is a compelling method in the domain of graph pooling. Although the datasets involved in the experiments of [1] partially overlap with those used in our work, we did not reference the results from MVPool in our paper due to differences in experimental settings.
We conducted a comparison with MVPool under the same experimental setup as described in Section 5.1 (due to time limits, we did not conduct a hyperparameter search and instead used the default hyperparameters provided in the code from [1]). Cluster-GT achieved superior results on 5 out of the 6 datasets.
| Model|IMDB-BINARY|IMDB-MULTI|COLLAB|MUTAG|PROTEINS|D&D|
|-|-|-|-|-|-|-|
| MVPool | 72.87±0.69 | 51.04±0.79 | 80.88±0.34 | 82.73±1.21 | 75.15±0.70 | 77.32±0.49 |
| Cluster-GT | 75.10±0.84 | 52.13±0.78 | 80.43±0.52 | 87.11±1.37 | 76.48±0.86 | 79.15±0.63 |
> **Q2. Typo: the expression of the attention score between the $i$-th cluster and the $t$-th node in the $j$-th cluster should not contain the value.**
Thank you for pointing out this typo. Indeed, the attention score between the $i$-th cluster and the $t$-th node in the $j$-th cluster should be:

$$\frac{\mathbf{A}\_{i, j}^P \mathbf{C}\_{tj} \kappa\_{\mathrm{B}}(\\{Q\_i, q\_i\\},\\{K\_j, k\_t\\})}{\sum\_j \mathbf{A}\_{i, j}^P \sum\_t \mathbf{C}\_{tj} \kappa\_{\mathrm{B}}(\\{Q\_i, q\_i\\},\\{K\_j, k\_t\\})}$$

We will make the necessary corrections.
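To illustrate this expression, the following toy NumPy sketch computes the normalized scores for hard cluster assignments, assuming exponential kernels and a product combination $\kappa_B = \kappa_C \cdot \kappa_N$ (a simplification chosen for illustration, not our exact implementation):

```python
import numpy as np

def bilevel_scores(Q, q, K, k, A_P, C):
    """Normalized attention of cluster i over node t (node t lying in
    cluster j), using kappa_B({Q_i,q_i},{K_j,k_t}) =
    kappa_C(Q_i,K_j) * kappa_N(q_i,k_t) with exponential kernels."""
    kC = np.exp(Q @ K.T)            # kappa_C(Q_i, K_j), shape (p, p)
    kN = np.exp(q @ k.T)            # kappa_N(q_i, k_t), shape (p, n)
    # gate[i, t] = sum_j A^P_{ij} * kappa_C(Q_i, K_j) * C_{tj}
    gate = (A_P * kC) @ C.T         # shape (p, n)
    raw = gate * kN                 # unnormalized bi-level scores
    return raw / raw.sum(axis=1, keepdims=True)
```

Each row of the result sums to one, matching the normalization over all nodes in the expression above.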
> **Q3. Suggestion: in Figure 2, combining Step 1 with Step 4 might provide a clearer illustration. Only with the presence of queries, keys, and values can the result of an attention operation be in the form of an aggregated representation; the result involving only keys and values is not well-defined. Additionally, combining Step 1 with Step 4 would better match the computation order of Equation 11.**
We thank the reviewer for the suggestion. However, there might be a slight misunderstanding here. In fact, the result involving only keys and values **is well-defined** (i.e., $\sum_t\psi(k_t) v_t$ in the paper, where $\psi(k_t)\in \mathbb{R}^{d_k\times1}$ and $v_t\in\mathbb{R}^{1\times d_v}$). The kernelized softmax trick aggregates keys and values first, and then combines them with the queries to reduce computational complexity (e.g., [2,3,4]).
Therefore, Step 1 and Step 4 can be separated. During our implementation, we follow the sequence of Steps 1, 2, 3, and 4 as illustrated in Fig. 2, which aligns with the computational process introduced in Section 3.2 and Equation 14.
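As a generic illustration of this trick (standard linear attention in the style of [2], not the exact N2C-Attn computation), aggregating keys and values before applying the queries looks like:

```python
import numpy as np

def linear_attention(q, k, v, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Softmax-free attention: aggregate keys/values first, then apply
    the queries, giving O(n*d^2) cost instead of O(n^2*d)."""
    Kf = phi(k)                        # psi(k_t), shape (n, d_k)
    S = Kf.T @ v                       # sum_t psi(k_t) v_t, shape (d_k, d_v)
    z = Kf.sum(axis=0)                 # sum_t psi(k_t), shape (d_k,)
    Qf = phi(q)                        # phi(q_i), shape (m, d_k)
    return (Qf @ S) / (Qf @ z)[:, None]
```

Because the query is applied last, the key-value aggregate $\sum_t \psi(k_t) v_t$ is a complete, reusable intermediate result, which is exactly why Step 1 can precede Step 4.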
> **Q4. Could the authors provide a heuristic explanation for why the cluster-level attention outperforms node-level attention on many benchmarks (as shown in Figure 5)? Cluster-level attention assigns the same weight to all nodes within a cluster, while node-level attention allows more flexible integration between cluster and node representations. It is not immediately clear why a more rigid approach would outperform a more flexible one.**
We appreciate the reviewer for highlighting this interesting point.
Firstly, we would like to clarify that cluster-level attention does not always outperform node-level attention. In fact, as shown in Figure 5, among the four datasets analyzed, cluster-level attention performs better only on IMDB-Binary and IMDB-Multi, whereas node-level attention excels on PROTEINS and D&D. In summary, both cluster-level and node-level attention have their merits: cluster-level attention is more suitable for social network datasets, while node-level attention fits better with bioinformatics datasets. This may be attributed to the structured nature of social network datasets like IMDB, where cluster-level patterns are more pronounced.
Moreover, although node-level attention is indeed more flexible than cluster-level attention, this does not necessarily mean that it outperforms the latter. Cluster-level attention effectively utilizes the auxiliary information of node cluster assignment. Despite being coarser in granularity compared to node-level information, this cluster-level information can be helpful and less noisy, particularly for the graph-level tasks we study in this paper.
Lastly, this issue can also be analyzed from the perspective of the impact of introducing additional constraints on model regularization: while cluster-level attention is not as flexible as node-level attention, the additional constraint within cluster-level attention might help prevent overfitting, enhancing its robustness and generalization capabilities.
[1] Zhang et al. Hierarchical multi-view graph pooling with structure learning. IEEE TKDE 2021.
[2]. Katharopoulos et al. Transformers are rnns: Fast autoregressive transformers with linear attention. ICML 2020.
[3]. Wu et al. NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification. NeurIPS 2022.
[4]. Huang et al. Tailoring Self-Attention for Graph via Rooted Subtrees. NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply. My concerns are addressed and I will maintain my support for the acceptance of the paper, with the score adjusted upward by one point. Best of luck.
---
Reply to Comment 1.1.1:
Title: Many thanks
Comment: Many thanks for the positive feedback. We remain dedicated to ongoing improvement. And we thank the reviewer for helping us improve the quality of our paper. | Rebuttal 1:
Rebuttal: We extend our sincere gratitude to the reviewers for their insightful feedback.
We are delighted to see the comments regarding the paper's **presentation**: "The paper is well written and illustrated" (Reviewer 2taa), its **motivation**: "The paper presents a clear and well-supported motivation for the proposed methodology" (Reviewer azHs), its **theoretical analysis**: "The theoretical claim aligns well with the model's motivation and behavior." (Reviewer Tre7), and its **experimental setup**: "Extensive and diverse experiments demonstrate the model's superior performance and efficiency." (Reviewer Tre7).
We would like to reiterate the main contributions of our work here:
1. **Motivation:** Current clustering-based graph pooling methods primarily follow a graph coarsening pipeline, which has its limitations (as illustrated in Fig. 1). We propose a method that facilitates information propagation between clusters without compressing them. This approach allows for a deep interaction between cluster-level and node-level information (as shown in Sec. 3.3).
2. **Technical Contribution:** We enhance the kernelized attention framework by integrating Multi-Kernel Learning (MKL), allowing for a more nuanced merging of information at both cluster-level and node-level granularities. Additionally, leveraging kernelization techniques, we implement an efficient attention computation method by propagating the aggregated keys and values among clusters.
In the following individual responses, we have addressed the main concerns raised by the reviewers. We are grateful for the reviewers' detailed suggestions. If there are any further questions or comments you wish to discuss, please do not hesitate to reach out. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Boosting Text-to-Video Generative Model with MLLMs Feedback | Accept (poster) | Summary: Recent text-to-video models like Sora show potential but suffer from poor video quality and misalignment with text prompts due to variable dataset quality. This study addresses these issues by leveraging Reinforcement Learning from Human Feedback (RLHF) to align outputs with human preferences. Due to the high costs of manual annotation, Multimodal Large Language Models (MLLMs) were used for video preference annotations, demonstrating strong concordance with human judgments. This led to the creation of VideoPrefer, containing 135,000 annotations, and the development of VideoRM, a reward model for video preferences. Experiments confirm the effectiveness of both VideoPrefer and VideoRM.
Strengths:
1. The research focuses on utilizing multimodal large models for dataset annotation, which is a highly promising direction with significant potential for real-world applications. This innovative approach leverages the strengths of multimodal data to enhance the accuracy and efficiency of dataset labeling, offering substantial advancements in the field.
2. The paper introduces a novel reward model in the field of video preference, which effectively evaluates video quality. This approach presents a significant advancement, as it offers a robust method for assessing video content, potentially leading to improved recommendations and enhanced user experience.
3. The quantitative analysis effectively demonstrates the text-to-video selection capability of the proposed VIDEORM model, highlighting its strong semantic alignment abilities. This thorough analysis underscores the model's proficiency in aligning textual inputs with relevant video content, showcasing its potential impact in the field.
Weaknesses:
1. Although existing research has demonstrated that GPT-4 can be used for data annotation, it is essential to perform a sample check of the annotated data to assess its quality. Relying solely on the literature to support the use of GPT-4 for annotation without conducting thorough spot checks and corrections undermines the reliability of the annotated content. Ensuring the accuracy and quality of annotations through systematic verification is crucial for maintaining the integrity of the dataset.
2. Furthermore, the evaluation of generated videos should consider perspectives from multiple roles. Incorporating role-playing prompts to capture diverse viewpoints would more accurately reflect the varied opinions and perspectives that different individuals may have regarding the same video. Relying on a single prompt template for evaluation is limiting and does not adequately represent the range of possible reactions and insights.
3. From the content of Algorithm 1's pseudocode, it appears that there is a lack of task-specific algorithmic innovation in the application of reinforcement learning for fine-tuning. The approach seems to follow standard practices without introducing novel techniques tailored to the specific challenges of the task at hand.
4. While the proposed VIDEORM model demonstrates strong performance across various metrics, the paper lacks interpretability experiments. Including such experiments would provide deeper insights into the model's decision-making process and enhance the overall understanding of its effectiveness.
Technical Quality: 3
Clarity: 3
Questions for Authors: Weakness:
1. Although existing research has demonstrated that GPT-4 can be used for data annotation, it is essential to perform a sample check of the annotated data to assess its quality. Relying solely on the literature to support the use of GPT-4 for annotation without conducting thorough spot checks and corrections undermines the reliability of the annotated content. Ensuring the accuracy and quality of annotations through systematic verification is crucial for maintaining the integrity of the dataset.
2. Furthermore, the evaluation of generated videos should consider perspectives from multiple roles. Incorporating role-playing prompts to capture diverse viewpoints would more accurately reflect the varied opinions and perspectives that different individuals may have regarding the same video. Relying on a single prompt template for evaluation is limiting and does not adequately represent the range of possible reactions and insights.
3. From the content of Algorithm 1's pseudocode, it appears that there is a lack of task-specific algorithmic innovation in the application of reinforcement learning for fine-tuning. The approach seems to follow standard practices without introducing novel techniques tailored to the specific challenges of the task at hand.
4. Although the proposed VIDEORM model demonstrates strong performance across various metrics, the paper lacks interpretability experiments. Including such experiments would provide deeper insights into the model's decision-making process and enhance the overall understanding of its effectiveness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The preference annotation in this study relies entirely on GPT-4, which is not advisable. While manual annotation is costly, the authors could employ individuals from diverse fields, genders, and age groups to annotate preferences according to a standardized procedure from their unique perspectives. These annotated samples could then be used as prompts for GPT-4 to perform further annotations, potentially improving the quality. Additionally, designing a richer set of prompt templates to capture preferences from multiple perspectives would enhance the dataset's inclusivity and robustness. The current approach lacks the diversity and comprehensive representation necessary for a truly inclusive dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We address your feedback point by point below.
---
>**Q1**: Although existing research has demonstrated that GPT-4 can be used for data annotation, it is essential to perform a sample check of the annotated data to assess its quality.
**A1**: Thank you for your insightful suggestions. To further validate the reliability of VideoPrefer, we randomly selected 2,000 samples from VideoPrefer and invited six human experts to conduct human preference scoring on two dimensions: prompt-following and video quality. We analyzed the correlation between the human experts' scores and GPT-4V's scores, which are presented in Table A. We found that GPT-4V exhibited excellent alignment with human judgment in both aspects (close to or exceeding 70%). A 70% agreement rate among qualified human preference annotators is a widely recognized benchmark across multiple fields, including preference annotations in NLP[1] and text-to-image[2], further demonstrating the reliability of VideoPrefer.
**Table A.** Correlations between GPT-4 V and human preference judgment across two aspects.
| Prompt-Following | Video-Quality |
| ---------------- | ------------- |
| 69.65% | 73.97% |
**Reference**
[1] Cui, Ganqu, et al. "Ultrafeedback: Boosting language models with high-quality feedback." ICLR 2024.
[2] Xu, Jiazheng, et al. "Imagereward: Learning and evaluating human preferences for text-to-image generation." NeurIPS 2024.
---
>**Q2**: Furthermore, the evaluation of generated videos should consider perspectives from multiple roles. Incorporating role-playing prompts to capture diverse viewpoints would more accurately reflect the varied opinions and perspectives that different individuals may have regarding the same video. Relying on a single prompt template for evaluation is limiting and does not adequately represent the range of possible reactions and insights.
**A2**: All the authors agree that your suggestion is very meaningful. We will incorporate your insights into our future research to achieve a more comprehensive evaluation of generated videos. Thank you for your valuable feedback.
---
>**Q3**: From the content of Algorithm 1's pseudocode, it appears that there is a lack of task-specific algorithmic innovation in the application of reinforcement learning for fine-tuning. The approach seems to follow standard practices without introducing novel techniques tailored to the specific challenges of the task at hand.
**A3**: The primary innovation of this paper lies in demonstrating that MLLMs can provide effective human preference annotation information for text-to-video generation. Based on this, we propose the most comprehensive video preference dataset, VideoPrefer, and the best-performing reward model, VideoRM. Algorithm 1 introduces a framework for fine-tuning text-to-video models with VideoRM, which is **not our main innovation** and indeed lacks task-specific algorithmic innovation. However, **we included it to offer a feasible approach for applying VideoRM to the alignment of text-to-video models**. This framework is **simple** and **effective**, allowing VideoRM to be used for aligning text-to-video models while avoiding the complex application strategies and high computational costs required in previous work[1] for applying image reward models to text-to-video model alignment.
In future research, we will explore and design more novel framework algorithms to achieve better fine-tuning for the alignment of text-to-video models.
**Reference**
[1] Yuan, Hangjie, et al. "InstructVideo: instructing video diffusion models with human feedback." CVPR 2024.
---
>**Q4**: While the proposed VIDEORM model demonstrates strong performance across various metrics, the paper lacks interpretability experiments. Including such experiments would provide deeper insights into the model's decision-making process and enhance the overall understanding of its effectiveness.
**A4**: Thank you for your interesting suggestion! We will conduct interpretability analyses and experiments on VideoRM in future research to enhance the study's comprehensiveness and completeness, and use these insights to design improved reward models for the text-to-video domain.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and detailed explanations. I have updated my rating to 8. | Summary: Recent advancements in text-to-video generative models, such as Sora, have shown impressive capabilities, generating significant interest for their potential applications. However, these models often rely on extensive datasets of variable quality, resulting in generated videos that may lack aesthetic appeal and fail to accurately reflect the input text prompts. To address this, leveraging Reinforcement Learning from Human Feedback (RLHF) aims to align model outputs with human preferences, but the high costs of manual annotation have limited comprehensive preference datasets. This study investigates the efficacy of Multimodal Large Language Models (MLLMs) for generating annotations, finding a high degree of concordance with human judgments. Building on this, the study uses MLLMs to perform fine-grained video preference annotations, creating VIDEOPREFER, a dataset with 135,000 annotations. Utilizing this dataset, the authors introduce VIDEORM, the first general-purpose reward model for video preference in the text-to-video domain. Comprehensive experiments validate the effectiveness of both VIDEOPREFER and VIDEORM, representing a significant advancement in aligning text-to-video models with human preferences.
Strengths: 1. **Advancing Text-to-Video Generative Models with Synthetic Feedback**: This work addresses an important direction in the field by leveraging synthetic feedback to enhance text-to-video generative models. The incorporation of Reinforcement Learning from Human Feedback (RLHF) aims to align model outputs more closely with human preferences, which is crucial for improving the quality and relevance of generated videos. This approach helps overcome the limitations of variable-quality datasets and enhances the models' ability to produce aesthetically appealing and contextually accurate videos.
2. **Significant Contribution through Comprehensive Preference Dataset**: The authors have compiled a preference dataset for text-to-video generative models, consisting of 14,000 prompts, 54,000 videos, and 135,000 preference choices. This dataset, named VIDEOPREFER, represents a significant contribution to the field of text-to-video generation.
3. **VIDEORM Reward Model for Fine-Tuning**: Building on the preference dataset, the authors have trained a reward model named VIDEORM. This model is tailored specifically for video preference in the text-to-video domain and can significantly aid subsequent text-to-video generative models in fine-tuning with high-quality data.
Weaknesses: 1. **Representativeness of Generated Video Quality**: Over 98% of the videos in VIDEOPREFER are generated using open-source models (as indicated in Table 5). This high percentage raises a crucial question about the extent to which the quality rankings of these synthetic videos can represent the quality rankings of real videos. It is important to clarify that the concern is not whether GPT-4 V aligns with human judgment, but rather how well the preference dataset and the resulting reward model genuinely capture and represent real-world video quality.
2. **Quality Demonstration through Video Demos for VIDEOPREFER**: The quality of the VIDEOPREFER dataset cannot be fully assessed through the images shown in Figure 10 alone. To better judge the dataset's quality, it is recommended that the authors provide 5-10 videos from the dataset. These demos would offer a clearer and more comprehensive understanding of how well the dataset captures human video preferences and the overall quality of the included videos.
3. **Assessment of VIDEORM through Comparison Demos**: Figure 11 showcases images selected by VIDEORM, but these images alone are insufficient to evaluate the actual quality of the selected videos. To properly assess the effectiveness of the VIDEORM reward model, the authors are recommended to provide 3 comparison sets of the selected videos by different reward models. These demos would allow for a more accurate judgment of the reward model's value and its ability to select high-quality videos that align with human preferences.
4. **Effectiveness of Fine-Tuning with VIDEORM Demonstrated through Demos**: The images in Figure 12 illustrating the results of the VIDEORM fine-tuned model do not adequately convey the actual video quality. To better demonstrate the effectiveness of fine-tuning with the proposed reward model, the authors are recommended to present 3 sets of comparison videos. These demos would offer a concrete demonstration of how fine-tuning with VIDEORM improves video generation quality, thus providing a more tangible proof of the model's effectiveness.
By addressing these points and providing the suggested video demos, the study could offer a more comprehensive and transparent evaluation of the proposed methods and datasets, significantly enhancing the understanding and validation of the results presented.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper discusses the limitations of choosing hyperparameters, which significantly influence the annotation accuracy of GPT-4 V. In addition, it would be better to also discuss the limitations of VIDEOPREFER and VIDEORM.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Data quality and representativeness']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We address your feedback point by point below.
---
>**Q1**: **Representativeness of Generated Video Quality**: Over 98% of the videos in VIDEOPREFER are generated using open-source models (as indicated in Table 5). This high percentage raises a crucial question about the extent to which the quality rankings of these synthetic videos can represent the quality rankings of real videos. It is important to clarify that the concern is not whether GPT-4 V aligns with human judgment, but rather how well the preference dataset and the resulting reward model genuinely capture and represent real-world video quality.
**A1**: Your suggestion is very interesting. In fact, the primary aim of our work is to demonstrate that MLLMs can provide reliable preference annotations for the text-to-video field and to present VideoPrefer, the most comprehensive video preference dataset, as well as the most effective reward model, VideoRM. Therefore, the focus of our research is on improving the quality of synthetic videos (generated by text-to-video models) and their alignment with human preferences. We introduced real videos into VideoPrefer to enhance the dataset's diversity and generalizability.
Moreover, we conducted the following analysis experiment: We randomly selected 500 samples containing real videos from VideoPrefer and conducted a statistical analysis of GPT-4V's preference scores in terms of prompt-following and video quality. We found that the likelihood of real videos receiving high scores was 84.91% and 78.23%, respectively (compared to a random chance of 25%). This indicates that GPT-4V tends to assign higher preference scores to real videos. Additionally, we had six human experts score these samples for preference. We observed that the likelihood of real videos receiving high scores was 82.33% and 88.29%, respectively (compared to a random chance of 25%). This alignment with GPT-4V as the scorer demonstrates that VideoPrefer and VideoRM can effectively capture the high quality of real videos.
---
>**Q2**: Quality Demonstration through Video Demos for VIDEOPREFER & Assessment of VIDEORM through Comparison Demos & Effectiveness of Fine-Tuning with VIDEORM Demonstrated through Demos.
**A2**: All the authors fully agree with your suggestion to provide more demonstrations comparing VideoPrefer, videos selected by different reward models, and videos generated through fine-tuning with VideoRM. In the latest version of the paper, we will enhance these demos to provide a more robust and effective evaluation and presentation of the quality of our research.
---
> **Q3**: This paper discusses the limitations of choosing hyperparameters, which significantly influence the annotation accuracy of GPT-4 V. In addition, it would be better to also discuss the limitations of VIDEOPREFER and VIDEORM.
**A3**: Thank you for your suggestions. We will include content about the limitations of VideoPrefer and VideoRM in the limitations section. A concise summary is as follows:
Although VideoPrefer is currently the largest preference dataset in the text-to-video domain, we believe it can be further scaled to achieve better alignment results. In the future, we plan to scale it further. Additionally, we will explore designing more optimal reward model architectures to support more effective video feature modeling, more robust preference prediction capabilities, and more efficient video processing.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. While I appreciate the effort to address my concerns, several issues remain that warrant further improvement.
Firstly, regarding the representativeness of generated video quality, the analysis provided on the preference scores for real videos is informative, but it does not fully address the core concern. I am not questioning whether GPT-4 V aligns with human judgment, but rather whether the trained VideoRM has the capability to do so. The high percentage of synthetic videos in the dataset may still limit the generalizability of the reward model to practical applications. Scaling up the dataset to include more real video data might make the resulting reward model more robust in guiding the model to generate more realistic videos.
Secondly, although the decision to enhance more video demonstrations in a future version of the paper does not help with the current review, I still hope the authors pursue this to further improve the quality of the paper.
Lastly, regarding the discussion on hyperparameters, are there any strategies to mitigate their impact? Additionally, the summarized discussion of limitations in the response is appreciated, but it feels somewhat superficial. Could you provide more concrete examples or a deeper exploration of the potential limitations?
In conclusion, while the rebuttal addresses some concerns, there remain areas where the paper could be improved. I encourage the authors to consider these points carefully in the revision process.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer qjMG
Comment: Thank you for providing us with detailed feedback. We have greatly benefited from your insights. Below are our responses:
>**Q1**: I am not questioning whether GPT-4 V aligns with human judgment, but rather whether the trained VideoRM has the capability to do so. The high percentage of synthetic videos in the dataset may still limit the generalizability of the reward model to practical applications. Scaling up the dataset to include more real video data might make the resulting reward model more robust in guiding the model to generate more realistic videos.
**A1**: All authors agree with your suggestions. We plan to release the second version of VideoPrefer in the future. In this upcoming version, one of our primary improvements will be to increase the proportion of real videos in the dataset, thereby making the reward model more robust. We outline the specific steps as follows:
**Step 1**: Use well-established event detection models (e.g., [mmaction2](https://github.com/open-mmlab/mmaction2)) to segment existing large-scale real video datasets (such as [Kinetics-700](https://paperswithcode.com/dataset/kinetics-700)) into multiple smaller video clips.
**Step 2**: Generate captions for the obtained real video clips using a mature and reliable large model, e.g., GPT-4V.
**Step 3**: Input the generated captions into randomly selected video generation models to produce corresponding synthetic videos.
**Step 4**: Combine the captions, real video clips, and the generated synthetic videos to create new VideoPrefer data examples.
By implementing these steps, we will be able to increase the proportion of real videos in the dataset.
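A rough sketch of how Steps 1–4 might be wired together is below. This is only an illustration: all three helper functions are hypothetical stubs standing in for mmaction2 (event segmentation), a captioning MLLM such as GPT-4V, and a randomly selected text-to-video model.

```python
# Hypothetical sketch of the four-step pipeline for adding more real
# videos to a future VideoPrefer release. The three helpers below are
# placeholder stubs, not real model calls.

def segment_events(video_path):
    # Step 1: split a long real video into event-level clips (stub).
    return [f"{video_path}#clip{i}" for i in range(2)]

def caption_clip(clip):
    # Step 2: caption a real clip with an MLLM such as GPT-4V (stub).
    return f"caption of {clip}"

def generate_video(caption):
    # Step 3: feed the caption to a randomly chosen text-to-video
    # generation model (stub).
    return f"synthetic video for '{caption}'"

def build_examples(video_path):
    # Step 4: combine caption, real clip, and synthetic video into
    # new VideoPrefer-style examples.
    examples = []
    for clip in segment_events(video_path):
        caption = caption_clip(clip)
        examples.append({
            "prompt": caption,
            "real_video": clip,
            "synthetic_video": generate_video(caption),
        })
    return examples
```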
---
>**Q2**: Secondly, although the decision to enhance more video demonstrations in a future version of the paper does not help with the current review, I still hope the authors pursue this to further improve the quality of the paper.
**A2**: Thank you for the reminder. We will certainly include additional video demonstrations across various aspects (such as the assessment of VideoRM or the results of fine-tuning) in the latest version of the paper, as per your request, to further enhance the quality of the paper and improve the visualization of the dataset and experimental results.
---
>**Q3**: regarding the discussion on hyperparameters, are there any strategies to mitigate their impact?
**A3**: One approach we have considered is to mitigate the impact of hyperparameter selection by conducting multiple annotations for each sample using GPT-4V. Specifically, we would annotate each sample N times, with the hyperparameters randomly sampled for each annotation. The final preference score for the sample would be the average of these N scores. We believe this method could help reduce the influence of hyperparameter choices to some extent. Of course, this is a very interesting research problem in itself, and we will continue exploring possible solutions to further improve the accuracy of the annotation.
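The averaging idea above can be sketched in a few lines. Here `score_fn` is a hypothetical stand-in for a GPT-4V scoring call, and randomly drawing only the temperature is an assumption for illustration (any other hyperparameters could be sampled the same way):

```python
import random
import statistics

def annotate_with_random_hparams(score_fn, sample, n=5, seed=0):
    """Score `sample` n times, each with randomly drawn hyperparameters,
    and return the mean score, reducing sensitivity to any single
    hyperparameter setting. `score_fn` is a hypothetical stand-in for
    a GPT-4V preference-scoring call."""
    rng = random.Random(seed)
    scores = [score_fn(sample, temperature=rng.uniform(0.0, 1.0))
              for _ in range(n)]
    return statistics.mean(scores)
```

With a deterministic seed the averaged annotation is reproducible, while the spread across the `n` draws gives a rough sense of how sensitive the score is to the hyperparameter choice.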
---
>**Q4**: Additionally, the summarized discussion of limitations in the response is appreciated, but it feels somewhat superficial. Could you provide more concrete examples or a deeper exploration of the potential limitations?
**A4**: Firstly, inspired by your feedback, we recognize a potential limitation of our dataset: the relatively low proportion of real videos, which might affect the robustness of the trained VideoRM (a specific solution for this can be found in **A1**). Another possible limitation we have identified is the lack of a comprehensive bias assessment for the VideoPrefer dataset in our current research. For instance, it remains unclear whether GPT-4V exhibits a noticeable preference for certain types of videos. Although we have analyzed the preference between real and synthetic videos within VideoPrefer and found that GPT-4V shows a strong preference for real videos (93% vs. 70%), further detailed analysis is needed. This includes assessing whether there is a preference for videos generated by specific models or for videos of certain styles. We plan to conduct more in-depth analyses in future research to address these potential limitations.
---
**If you have any further questions or concerns, please feel free to contact us at any time. We are always available and look forward to further discussions with you. :)**
Best regards,
All Authors | Summary: The paper presents a dataset and a learned reward model to better align the generated outputs of text-to-video models with human preferences. Specifically, the authors make use of GPT-4V to provide preference scores for a large dataset of videos by learning from human feedback on a smaller set of videos. The authors then develop their proposed architecture for the reward model on top of this dataset to predict preference scores given prompts and videos. They fine-tune text-to-video models with their proposed reward model and show the improvements in alignment with human feedback through quantitative evaluations, ablation experiments, qualitative results, and user studies.
Strengths: 1. The proposed idea of leveraging an LLM such as GPT-4V to create a large-scale synthetic dataset for the text-to-video paradigm is useful. It makes good use of existing technology to explore previously intractable problems.
2. The reward model design is technically sound and rigorous.
3. The experiments presented in the paper are extensive, covering multiple LLMs for dataset creation, multiple feature representations for text-to-video models, and multiple datasets for showing model performances.
Weaknesses: 1. Some key aspects of the dataset collection process are unclear.
1.1. While GPT-4V has the best agreement with human feedback, it is still not very high (around 70%, according to the authors). Did the authors explore any prompt-tuning or other approaches to improve the agreement? Did the authors observe any potential patterns in the disagreement? Perhaps certain categories of videos have more agreement between GPT-4V and human feedback than other categories?
1.2. When collecting data from video caption datasets, do the authors process the captions in some way to make them structurally and semantically similar to video generation prompts?
1.3. Did the authors run any data filtering for discriminatory, harmful, or abusive content in the collected/generated prompts and videos?
1.4. Did the authors consider various levels of complexity in the prompts, such as foreground and background details, number of objects, actions, interactions, etc.?
1.5. How did the authors quality-check the videos generated for the proposed dataset? It is not fully clear how the proposed approach avoids a cyclic process, where improving video generation depends on a better reward model, which, in turn, requires generated videos of a certain quality.
1.6. Can the authors please clarify the specific numbers regarding the dataset size? How did they go from 14K data items to 135K preference choices?
2. Some key aspects of the user study are unclear, making it hard to fully follow the results.
2.1. Is five participants a sufficiently high number from which to draw conclusions? How many videos did each participant respond to? Was there a mix of non-experts and subject matter experts such as content creators or video editors? If so, were their responses segregated in any way?
2.2. Did the authors check for participants' engagement, e.g., whether participants responded too quickly or too slowly, and whether any such responses needed to be discounted? Did the authors check for any recency bias or response drifts, where the participants may have responded differently to similar video content over time?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In their ablation study, why do the authors consider up to 12 frames? Is this limit due to computational constraints?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, but needs more details on how the authors filter for discriminatory, harmful, or abusive content in their collected dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Due to space constraints, we address your questions in the **Rebuttal** below as well as at the top of this page in the **Author Rebuttal**:
---
>**Q1**: While GPT-4V has the best agreement with human feedback, it is still not very high. Did the authors explore any prompt-tuning or other approaches to improve the agreement? Did the authors observe any potential patterns in the disagreement?
**A1**: **A 70% agreement rate among qualified human preference annotators is a widely recognized benchmark across multiple fields, including preference annotations in NLP[1] and text-to-image[2].** This means that when different human experts provide preference annotations for the same sample, their agreement rate is typically around 70%. With GPT-4V achieving an agreement rate of 69.65%, we consider it a reliable preference annotator in the text-to-video domain.
We explored the impact of different input templates and hyperparameters (as discussed in Section 5) on the accuracy of GPT-4V annotations and identified the most suitable input template and hyperparameters. We believe that exploring prompt-tuning or other approaches to improve agreement is a very interesting research topic, and we plan to explore this further in future research.
Additionally, our analysis shows that in VideoPrefer samples containing real videos, the agreement between GPT-4V and human feedback is higher, reaching approximately 93% (compared to about 70% in samples without real videos). Both GPT-4V and human feedback tend to assign higher preference scores to real videos in these samples, indicating that the quality of videos generated by existing models still needs improvement compared to real videos.
**Reference**
[1] Cui, Ganqu, et al. "Ultrafeedback: Boosting language models with high-quality feedback." ICLR 2024.
[2] Xu, Jiazheng, et al. "Imagereward: Learning and evaluating human preferences for text-to-image generation." NeurIPS 2024.
---
>**Q2**: Do the authors process the captions to make them similar to video generation prompts?
**A2**: We did not apply any special processing to the captions; instead, we used them directly as input prompts for the video generation model to generate videos. We believe this approach ensures that samples in VideoPrefer, composed of real and generated videos, fairly correspond to the same prompt. If we were to preprocess the prompts, we are concerned that differences between prompts corresponding to generated and real videos might introduce potential bias into the final preference results.
---
>**Q3**: Did the authors run any data filtering for prompts and videos? Needs more details on how the authors filter for discriminatory, harmful, or abusive content in their collected dataset.
**A3**: The prompts in VideoPrefer are primarily sourced from the VidProM dataset, as well as the MSR-VTT and ActivityNet video caption datasets. The VidProM dataset has already implemented harmful content filtering to screen prompts (covering toxicity, obscenity, identity attacks, etc.). Since MSR-VTT and ActivityNet are widely used video caption datasets that have been available for some time, we believe they contain minimal harmful information. Therefore, we did not perform additional harmful content filtering on the prompts. Similarly, the videos in VideoPrefer are mainly sourced from open-source video generation models and video caption datasets, and we have not applied data filtering to them. Your suggestion is valuable, and we will incorporate harmful content filtering for both prompts and videos in the v2 version of VideoPrefer to ensure the dataset is more legally compliant and appropriate.
---
>**Q4**: Did the authors consider various levels of complexity in the prompts?
**A4**: In this study, VideoPrefer directly uses the existing prompts from the VidProM dataset, as well as the MSR-VTT and ActivityNet video caption datasets, without considering complexity levels. Your suggestion is valuable :), and we plan to distinguish and categorize the complexity levels of prompts in future research. We will make the necessary adjustments to enhance the effectiveness of VideoPrefer.
---
>**Q5**: How did the authors quality-check the videos generated for the proposed dataset? It is not fully clear how the proposed approach avoids a cyclic process, where improving video generation depends on a better reward model, which, in turn, requires generated videos of a certain quality.
**A5**: **The quality check for the generated videos primarily relies on GPT-4V, i.e., MLLMs**. VideoPrefer is essentially a preference dataset in the text-to-video domain, where each sample includes a prompt and four video candidates corresponding to that prompt. GPT-4V scores these four videos on two dimensions: prompt-following and video quality (see Section 2.2). The comparison of preference scores between high-scoring and low-scoring videos helps us train an effective reward model (see Section 3.2) to assess the degree of human preference for videos.
>**Q6**: Can the authors clarify the dataset size? How did they go from 14K data items to 135K preference choices?
**A6**: Our VideoPrefer dataset contains 14k prompts, each with four video candidates. These four candidates are scored by GPT-4V on two dimensions: prompt-following and video quality. Pairwise comparisons among the four candidates yield 6 pairs per dimension, i.e., 12 pairwise preference comparisons per prompt. Therefore, we obtain 14k × 12 = 168k preference choices. After filtering out comparisons where the preference scores were equal (as these cannot be used to optimize the reward model), we finalized 135k preference choices.
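As a sanity check, the arithmetic behind these counts can be reproduced in a few lines (a sketch; the tie filtering that reduces 168k to the reported 135k depends on the actual scores and is not modeled here):

```python
from itertools import combinations

num_prompts = 14_000        # prompts in VideoPrefer
candidates_per_prompt = 4   # video candidates scored per prompt
dimensions = 2              # prompt-following and video quality

# Pairwise comparisons among 4 candidates: C(4, 2) = 6 per dimension.
pairs_per_dim = len(list(combinations(range(candidates_per_prompt), 2)))
comparisons_per_prompt = pairs_per_dim * dimensions  # 6 * 2 = 12

total = num_prompts * comparisons_per_prompt
print(pairs_per_dim, comparisons_per_prompt, total)  # 6 12 168000
```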
---
>**Q7**: Are five participants sufficient? How many videos did each participant respond to? ...?
>**Q8**: Did the authors check for participants' engagement? ...?
>**Q9**: In their ablation study, why do the authors consider up to 12 frames?
**A**: Please see the **Author Rebuttal**.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I thank the authors for their detailed rebuttal, which addresses all my concerns. I maintain my original recommendation for acceptance and have slightly raised my score. | Summary: This paper introduces a new dataset called VideoPrefer, a collection of (simulated) human preferences on videos conditioned on certain language prompts. VideoPrefer utilizes GPT-4v as an automatic reward assessor, which contains videos from both machine-generated and real-world curated videos. The work then utilizes the collected preference as a reward to learn a reward model named VideoRM, constructed on top of a prior model HPS v2, extending its capabilities to temporal dimension and thus can present reward modeling in videos.
Strengths: - This work collects a large scale of preference data, potentially useful for evaluating text-to-video generative models.
- It is demonstrated that the designed reward model VideoRM shows the best-aligned preferences among the tested baseline methods.
- Fine-tuning text-to-video models with the presented VideoRM seems effective under the designed experimental settings.
Weaknesses: - While I appreciate Table 2's analysis on the TVGE dataset among the tested video assessor candidates, a standalone statistical analysis of the VideoPrefer dataset itself is still required. People will rely on VideoRM, which is trained on the VideoPrefer data, and it remains questionable how faithful the dataset is.
- Lack of some analysis between real-world and generated videos. Under the same prompt, are real-world videos always or more likely to be better? What heuristics can people use to expand beyond the collected video sets?
- It is unclear how text prompts are aligned with real-world videos in Section 2.2.
- Lack of rigorous statistical analysis on the curated video prompts. How diverse are they? What is the type-token ratio? What are the top frequent predicates and affected entities? What are the genres? It is hard to understand how challenging and/or how underlying bias would have affected the video generation process from this curated resource.
- Why only consider an extension of HPS v2 as the base reward model? There are plenty of good video-language models that could potentially be adapted as reward models (such as [1]). If the work could show more unbiased learning from the dataset solely with video-based preference models, it would strengthen the work more.
[1] Sung, Yi-Lin, Jaemin Cho, and Mohit Bansal. "Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks." CVPR 2022.
Technical Quality: 3
Clarity: 2
Questions for Authors: - There are some minor typos, for example, L84 the “K” is already a thousand there. Please be more mindful when writing.
- What or who is deciding the “win” in Figure 2? The GPT-4v assessor?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - The limitations of this work do not seem to be explicitly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Due to space constraints, we address your questions in the **Rebuttal** below as well as at the top of this page in the **Author Rebuttal**:
---
> **Q1**: Solely performing statistical analysis on the VideoPrefer dataset is still required. It remains questionable how faithful the dataset is.
**A1**: Thank you for your insightful suggestions. To further validate the reliability of VideoPrefer, we randomly selected 2,000 samples from VideoPrefer and invited six human experts to conduct human preference scoring on two dimensions: prompt-following and video quality. We analyzed the correlation between the human experts' scores and GPT-4V's scores, which is presented in Table A. We found that GPT-4V exhibited excellent alignment with human judgment in both aspects (close to or exceeding 70%), further demonstrating the reliability of VideoPrefer.
Additionally, we provide a comprehensive statistical analysis and presentation of VideoPrefer in Section B of the appendix, including the distribution of preference scores, the distribution of video sources, and more. In the latest version of the paper, we will also include more detailed statistical analyses of VideoPrefer, such as the score distribution between real and generated videos, statistical analysis of the prompts collection, and bias analysis. These will serve as references for those using VideoPrefer.
**Table A.** Correlations between GPT-4V and human preference judgments.
| Prompt-Following | Video-Quality |
|-|-|
| 69.65%|73.97%|
---
>**Q2**: Under the same prompt, are real-world videos always or more likely to be better? What heuristics can people use to expand beyond the collected video sets?
**A2**: We conducted the following analysis: we randomly selected 500 samples containing real videos from VideoPrefer and performed a statistical analysis of GPT-4V's preference scores on prompt-following and video quality. We found that real videos had a high likelihood of receiving top scores, at 84.91% and 78.23%, respectively (compared to 25% in a random scenario), indicating that GPT-4V tends to assign higher preference scores to real videos. To explore whether this phenomenon is reasonable, we asked six human experts to also score these samples. The likelihood of real videos receiving top scores from the experts was 82.33% and 88.29% for prompt-following and video quality, respectively (again 25% in a random scenario). This indicates that real videos generally receive higher preference scores, which also highlights that the current quality of video generation models still needs improvement.
Additionally, this inspired us to consider generating more comparison videos using video generation models for each sample containing real videos and using real videos as cases that typically receive higher human preference scores to rapidly expand the preference dataset. We will further explore the effectiveness and potential limitations of this method in future research.
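For reference, the top-score statistic above reduces to the fraction of samples where the real video beats every generated candidate; with one real and three generated videos per sample, random scoring yields the 25% baseline. A toy sketch (field names are hypothetical):

```python
import random

def top_score_rate(samples):
    """Fraction of samples in which the real video holds the strictly
    highest preference score among all candidates."""
    wins = sum(
        1 for s in samples
        if s["real_score"] > max(s["generated_scores"])
    )
    return wins / len(samples)

# Toy data: one real + three generated candidates per sample,
# mirroring the 4-candidate setup (random baseline = 1/4 = 25%).
rng = random.Random(0)
samples = [
    {"real_score": rng.random(),
     "generated_scores": [rng.random() for _ in range(3)]}
    for _ in range(10000)
]
print(top_score_rate(samples))  # close to 0.25 under random scoring
```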
---
>**Q3**: It is unclear how text prompts are aligned with real-world videos in Section 2.2.
**A3**: In fact, we directly use an existing video caption dataset as the source of text prompts and their corresponding real video pairs. Specifically, for each prompt in the video caption dataset, we generate three additional video candidates using a randomly sampled generative model. These generated video candidates, along with the original prompt and the corresponding real video clip, together constitute a single sample of VideoPrefer.
---
> **Q4**: Why only consider extension of HPS v2 as the base reward model? There are plenty of good video-language models that can be potentially useful to be adapted as a reward model.
**A4**: The reasons for selecting the extension of HPS v2 as the base reward model are as follows:
1. **High Performance and Low Bias**: HPS v2 is the best-performing reward model trained on the largest debiased preference dataset in the text-to-image domain, with minimal bias. Since video preference scores are highly correlated with the preference scores of each frame in the video, HPS v2 naturally serves as an effective reward model for the video generation domain[1]. By using HPS v2 as the initial model, we provide our reward model with a strong initial point and foundational knowledge, thus enhancing the effectiveness of the reward model.
2. **Efficient Deployment**: During the alignment process of video generation models, the deployment cost of the reward model (in terms of parameters, computational load, and inference time) can significantly impact the alignment performance under resource constraints. HPS v2 has a much lower deployment cost compared to some video-language models, making it easier to deploy during the model alignment process.
We also finetuned the video-language models you recommended[2] (denoted as VL-Model) on both open-source human-crafted preference datasets and VideoPrefer, using the same training steps & data as VideoRM. The preference prediction accuracy is shown in Table B. We found that its performance was not as good as VideoRM, which further demonstrates the effectiveness of using HPS v2 from the text-to-image domain as the foundation for our reward model.
**Table B.** Preference prediction accuracy of different reward models.
|Model|TVGE|VBench|T2VQA-DB|
|-|-|-|-|
|HPSv2|69.5|55.7|52.8|
|VideoRM|**73.7**|**63.5**|**65.4**|
|VL-Model|64.44|53.38|53.09|
**Reference**
[1] Yuan, Hangjie, et al. "InstructVideo: instructing video diffusion models with human feedback." CVPR 2024.
[2] Sung, Yi-Lin, et al. "Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks." CVPR 2022.
> **Q5**: There are some minor typos.
**A5**: We apologize for the typos and will correct them in the latest version of the paper.
---
> **Q6**: What or who is deciding the “win” in Figure 2?
**A6**: Please see the **Author Rebuttal**.
---
>**Q7**: The limitations do not seem to be explicitly addressed.
**A7**: Please see the **Author Rebuttal**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses.
Some of the points addressed my doubts; however, the original manuscript was a bit far from publication-ready.
I retain my score.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Dear Reviewer nZ7n
We would like to sincerely thank you for your feedback and suggestions :). We will carefully revise our paper, addressing all issues (e.g., typos and adding statistical analysis on the curated video prompts) to make the paper clearer and more complete.
Best regards,
All Authors | Rebuttal 1:
Rebuttal: ## Supplementary rebuttal for Reviewer nZ7n
---
> **Q6**: What or who is deciding the “win” in Figure 2? The GPT-4v assessor?
**A6**: The "win" in Figure 2 is determined by scores from six human experts. The sample with the highest average expert score is considered the ground truth. We apologize for omitting this explanation in the paper, and we will include it in the revised version.
>**Q7**: The limitations do not seem to be explicitly addressed.
**A7**: In Section 5 and Figure 8 of our paper, we have conducted extensive comparative ablation experiments and analyses for choosing hyperparameters. We hope this can serve as a reference for future similar works when selecting hyperparameters. A relatively efficient solution, in our opinion, is to first use different hyperparameters to generate small-scale MLLM annotations and assess their quality. Once the best hyperparameter combination is identified, large-scale annotation generation can proceed (this is the method we used to construct VideoPrefer).
We will include this limitation as a separate section in the updated version of our paper. We also believe that finding efficient and effective methods for choosing hyperparameters in MLLM annotation generation is a very interesting research direction, and we plan to conduct further studies in this area.
##
## Supplementary rebuttal for Reviewer snrb
---
>**Q7**: Is five participants a sufficiently high number from which to draw conclusions? How many videos did each participant respond to? Was there a mix of non-experts and subject matter experts such as content creators or video editors? If so, were their responses segregated in any way?
**A7**: **Based on previous related work, where three participants were used for human studies in the text-to-image domain[1] and the NLP domain[2], we believe that having five participants rate the videos is sufficient to support our conclusions.** Each participant rated all the results, and every generated video used for evaluation was scored by all participants. The final score for each video was determined by the average score from all participants.
Among the five participants, there were three experts (in image aesthetics, video aesthetics, and image quality) and two non-experts to ensure a more comprehensive and accurate evaluation. During the evaluation process, the five participants did not engage in any form of discussion or communication. Thank you for your question, and we will include these details in the updated version of the paper.
**Reference**
[1] Xu, Jiazheng, et al. "Imagereward: Learning and evaluating human preferences for text-to-image generation." NeurIPS 2024.
[2] Cui, Ganqu, et al. "Ultrafeedback: Boosting language models with high-quality feedback." ICLR 2024.
---
>**Q8**: Did the authors check for participants' engagement, e.g., whether participants responded too quickly or too slowly, and whether any such responses needed to be discounted? Did the authors check for any recency bias or response drifts, where the participants may have responded differently to similar video content over time?
**A8**: We did not check the time each participant took to respond to ensure they didn't respond too quickly or too slowly. However, we implemented a **Post-Labeling Check** mechanism for each participant to ensure the reliability of the final results. Participants were required to review 20% of randomly selected samples for quality checking and rescore them. If the scores differed by more than 25%, that participant's score for the video would not be considered.
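A rough sketch of such a check (the 20% resampling rate and 25% threshold come from the answer above; function names and the normalized [0, 1] score scale are assumptions):

```python
import random

def sample_for_recheck(sample_ids, fraction=0.2, seed=0):
    """Randomly select a fraction of a participant's labeled samples
    for rescoring (the Post-Labeling Check pool)."""
    rng = random.Random(seed)
    k = max(1, int(len(sample_ids) * fraction))
    return rng.sample(sample_ids, k)

def passes_post_labeling_check(first_scores, rescored, threshold=0.25):
    """Compare a participant's original scores against their rescoring
    of the same items; fail if any pair differs by more than `threshold`
    of the score scale (scores assumed normalized to [0, 1])."""
    return all(abs(a - b) <= threshold
               for a, b in zip(first_scores, rescored))

ids = list(range(100))
recheck = sample_for_recheck(ids)
print(len(recheck))  # 20
```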
---
>**Q9**: In their ablation study, why do the authors consider up to 12 frames? Is this limit due to computational constraints?
**A9**: We considered using up to 12 frames based on the conventional settings from previous works that employed models with clip-like structures for video-related tasks[1,2]. Subsequently, we increased the number of frames to 24 and 32 for further implementation. The experimental results are presented in Table A, where we observed that the performance of VideoRM improved with an increased number of frames. However, considering the deployment cost and speed of aligning video generation models, as well as the performance of VideoRM, we determined that using 8 frames is the most suitable choice.
**Table A. Pair-wise preference prediction at TVGE dataset for VideoRM when using different input frames.**
| frames | 4 | 8 | 12 | 24 | 32 |
| ----------------------- | ---- | ---- | ---- | ---- | ---- |
| **prediction accuracy** | 72.8 | 73.7 | 73.5 | 74.0 | 74.4 |
**Reference**
[1] Luo, Huaishao, et al. "Clip4clip: An empirical study of clip for end to end video clip retrieval and captioning." Neurocomputing 2022.
[2] Wang, Mengmeng, et al. "Actionclip: Adapting language-image pretrained models for video action recognition." TNNLS 2023. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LaSe-E2V: Towards Language-guided Semantic-aware Event-to-Video Reconstruction | Accept (poster) | Summary: This paper proposes a language-guided event-based video reconstruction method LaSe-E2V, which introduces the popular language model into event-based imaging tasks. The LaSe-E2V is generally based on a diffusion model. In order to further improve the method, this paper proposes a series of designs, such as event-guided spatio / temporal attention for the fusion between event and previous reconstructed frames, previous frame conditioning for ensuring consistency, event-aware mask loss, and event-aware noise initialization. Experimental results demonstrate the effectiveness of the proposed methods.
Strengths: 1. This paper introduces the popular language model into the event-based imaging community, providing new ideas for subsequent works.
2. The event-guided spatio / temporal attention provides a new idea for event-image fusion.
3. The event-aware mask loss and the event-aware noise initialization are specially designed for event-based imaging based on diffusion models.
Weaknesses: There are some unclear points, please see the Questions part.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The proposed method requires a description of scenes contained in the event streams to be reconstructed. What if the description of the scene is unknown (For example, a ``blind'' event stream)? Can existing models handle this situation?
2. The running times and GPU memory costs for inference are absent. Diffusion models are still new for the event-based imaging community, and offering these values can help readers better understand the proposed method and do subsequent work.
3. It says in Line 228 of the paper that the definition of the SSIM metric has ambiguity. What's the ambiguity?
4. In the 3rd column of Fig. 4, there are more artifacts in the text part reconstructed by the proposed method than other methods. It's an interesting phenomenon, and it would be better if some explanations were given.
5. There is a typo ``evet'' in the Line 66 of the manuscript.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see the Questions part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the constructive comments and valuable concerns.
> **Can the model handle the situation when the description of the scene is unknown?**
>
- Yes, our model can handle such scenarios. If the text description is unavailable, our framework defaults to a conventional event-to-video approach. As demonstrated in Tab. 2 (row 1), even without text input, our method **still achieves reasonable performance**.
- Additionally, event-based multi-modality models such as EventBind[77] and EventClip[78] can also be employed to **generate text descriptions directly from the event stream**.
> **About running times and GPU memory costs for inference.**
>
- Thanks for the question. We have included a complexity comparison in **Global Response to All Reviewers.** As illustrated in Tab. A-1, our method requires significant inference time due to the multiple denoising steps, showing a known limitation of diffusion models. However, while diffusion models have been employed in tasks like super-resolution and depth estimation, “*they are still new to the event-based imaging community*”(***uWG8***). As such, our approach could hopefully *“provide new ideas for subsequent works”(**uWG8**)* in this field.
> **Ambiguity of SSIM metric.**
>
- The SSIM metric raises ambiguity because it involves several hyper-parameters that may differ across various codebases. For example, in the `structural_similarity` function of the **skimage** package, parameters like `gaussian_weights` and `sigma` are used for spatial weighting of each patch with a Gaussian kernel, `use_sample_covariance` indicates whether to normalize covariances, and `K1` and `K2` are algorithm-specific parameters that need to be set.
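To illustrate how these codebase conventions diverge, here is a minimal sketch using skimage's `structural_similarity` (the parameter settings below are illustrative and not necessarily the exact configuration used in the paper):

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.1 * rng.standard_normal((64, 64)), 0, 1)

# skimage defaults: uniform 7x7 window, sample covariance.
ssim_default = structural_similarity(ref, noisy, data_range=1.0)

# An alternative common convention: Gaussian spatial weighting with
# sigma=1.5 and population (non-sample) covariance.
ssim_gaussian = structural_similarity(
    ref, noisy, data_range=1.0,
    gaussian_weights=True, sigma=1.5, use_sample_covariance=False,
)

# The two conventions generally give different values for the same pair.
print(ssim_default, ssim_gaussian)
```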
> **About explanations for the artifact of text reconstruction in Fig.4.**
>
- Thanks for the insightful question. Although our method achieves superior overall performance, some artifacts persist in the text part reconstruction. This issue arises because we depend on the prior of the pre-trained diffusion model (SD2 [42]), which also faces challenges in text generation. However, the recently released SD3 [79] claims to show improved text rendering capabilities, which could potentially address this problem.
> **Typo: "evet".**
>
- Thanks for pointing out the typo. We will correct it in the final version.
**Additional Reference:**
[77] Zhou J, Zheng X, Lyu Y, et al. E-clip: Towards label-efficient event-based open-world understanding by clip[J]. arXiv, 2023.
[78] Wu Z, Liu X, Gilitschenski I. Eventclip: Adapting clip for event-based object recognition[J]. arXiv, 2023.
[79] Esser P, Kulal S, Blattmann A, et al. Scaling rectified flow transformers for high-resolution image synthesis[C]//ICML. 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I'll keep my rating. | Summary: The paper explores the use of Denoising Diffusion Models (DDM) for the event-based video reconstruction task. The most important contribution, in my opinion, is the improvement in video quality. The researchers adapted an existing model (ModelScope Text-to-Video) for the event-based video reconstruction task. They introduced ESA, which modifies a conditional inpainting technique (presented in "High-Resolution Image Synthesis with Latent Diffusion Models"). Additionally, they used text conditioning to further enhance video quality.
The paper makes two innovative contributions. The first is Event-aware Noise Initialization; this technique enables the use of frame I_(t-1) to reconstruct frame I_(t) during the inference stage (autoregressively).
The second is the Event-aware Mask Loss; this new loss function is designed to improve temporal consistency.
In summary, the contributions are:
1. An existing text-to-video model (diffusion-based) was adapted for the event-based recosntruction task.
2. The model was adapted to event data using a conditional inpainting technique. The authors also proposed Event-aware Noise Initialization and an Event-aware Mask Loss to improve video quality.
3. The result is a new state-of-the-art in event-to-video reconstruction.
However, the proposed model has several drawbacks. The DDM "hallucinates" content, especially in areas with low or no event data, which can be problematic for applications like object detection in the context of self-driving cars.
Strengths: As mentioned before, the main contribution of this paper is the improvement in video quality. Also, this paper proves that it is possible to use diffusion-based models for the event-based video reconstruction task.
Weaknesses: Diffusion-based models tend to "hallucinate," producing parts of frames that are far from reality (in the event-based video reconstruction task), which can be problematic for applications like object detection in the context of safety (self-driving cars).
Another problem is that the proposed model uses text prompts for video generation. Although these text prompts allow some control over the content, they also introduce a lot of ambiguity. This leads to a trial-and-error process (prompt engineering) until the most realistic reconstruction is achieved according to the user.
Additionally, the proposed model requires very high computational resources. Inference times are not specified in the paper, but it is known that the model does not run in real-time.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Although traditional metrics (LPIPS, MSE, and SSIM) show an increase in video reconstruction quality, the qualitative results, especially in Figure 3, reveal that the reconstructed frames do not reflect reality in specific areas. These artifacts represent a small percentage of the image (in terms of pixels) and are not captured by traditional metrics; how does this affect the overall video quality?
2) In the inference stage, no mention is made of the length (number of frames) that the model can reconstruct. If there is a limit on the number of frames (and, if so, how many), how is temporal consistency between reconstructed sections resolved?
3) Despite discussing the model's temporal consistency, no metrics verify this.
4) On line 134, the variable V is introduced, representing the conversion of event data to voxels. However, it is unclear whether V represents a segment with N temporal bins or N segments (the assumption is that V represents N segments). Clarification of this detail is necessary.
5) On line 139, the input latent representation is introduced through the variable \hat{\epsilon}. In DDMs, the variable \epsilon represents noise, and the noisy input latent representation is normally denoted z_t, in this case \hat{z}^{i}_{t}, since it is the latent image after applying the noise; the event representation is z^{i}_{e}. I'm not sure if this is correct.
6) The Event-guided Spatio-temporal Attention (ESA) module is very similar to the technique used in SD (High-Resolution Image Synthesis with Latent Diffusion Models) for inpainting. A reference to this would be beneficial.
7) In section 4.2, on line 162, the title "Event Spatial Attention" mentions a cross-attention technique. However, the title does not reflect this, causing confusion. It could be changed to "Event Spatial Cross-Attention". Similarly, on line 176, the title "Event Temporal Attention" maybe should be changed to "Event Temporal Cross-Attention."
8) In equation 8, line 199, the value of $\lambda$ is not mentioned.
9) No mention is made of the computational cost in the inference stage, nor is the inference time mentioned. Could these data be mentioned?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Due to the DDM's tendency to hallucinate, the model cannot be used in self-driving cars or other computer vision applications where safety is involved.
It is believed that the model cannot be run in real time, much less on an embedded device.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for appreciating our work with valuable suggestions. We address the comments below.
> **About the "hallucination" issue of diffusion-based models.**
>
- Yes, this is indeed a problem for diffusion models. To address it, we have proposed novel techniques (i.e., the ESA module, Event-aware Mask Loss, and Event-aware Noise Initialization) to counteract the diversity objective of the diffusion model. The proposed techniques thereby enhance the consistency between events and the video and reduce the hallucination issue.
- Please note that, in static areas without event data, the model tends to reconstruct these regions with hallucination. By incorporating text descriptions, the hallucination issue can be mitigated to some extent, as shown in Fig. 7 (left).
> **Text could introduce ambiguity and lead to a trial-and-error process.**
>
- Unlike existing image synthesis pipelines, our method primarily relies on event data, with text descriptions serving as **supplementary guidance** only when events are too sparse or not triggered. This approach minimizes the potential for ambiguity.
- Additionally, our method requires only **coarse** text descriptions to effectively leverage the semantic prior in the T2I diffusion model. Most of these descriptions simply identify objects, such as "car" or "road," generated by the tagging model RAM, which introduces little ambiguity. As shown in Tab. 1 and Fig. 4, our experiments confirm that this coarse-grained text is effective and does **not** necessitate a trial-and-error process.
> **Computational Resources and Inference Times.**
>
- Thanks for the suggestion. We have included a complexity comparison in **Global Response to All Reviewers**. As shown in Tab. A-1, our method requires considerable inference time due to multiple denoising steps, showing a limitation of diffusion models. However, our approach “*could inspire future work in the field*”(***ueNs***) by incorporating language guidance and proves “*it is possible to use diffusion-based models for the event-based video reconstruction task*”(***H8fU***).
> **The artifacts in Fig. 3 are not captured by traditional metrics in Tab. 1.**
>
- The reviewer might misunderstand Fig.3. Fig. 4 presents the qualitative results corresponding to the quantitative data in Tab. 1, derived from the widely-used ECD, MVSEC, and HQF datasets. These results demonstrate a significant improvement in reducing artifacts. Conversely, Fig. 3 illustrates qualitative results from a **different** dataset (HS-ERGB) that involves fast motion.
- Additionally, the traditional metrics (MSE, SSIM, and LPIPS) used in Tab. 1 are widely recognized in recent E2V research. These metrics have been proven to effectively assess results at the pixel level (MSE), structure level (SSIM), and in terms of human perception (LPIPS).
> **How is temporal consistency between reconstructed sections resolved?**
>
- As mentioned in Line 184, we condition the **frame from the previous section** to enable an **auto-regressive** pipeline for long video reconstruction. The ablation study in Tab. 2 (2nd row vs. 3rd row) has demonstrated the effectiveness of the previous frame condition to ensure the temporal consistency between sections. Theoretically, the auto-regressive pipeline imposes **no limitations** on the number of frames.
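The auto-regressive rollout described above can be sketched in miniature as follows (the per-section reconstructor is a stand-in stub, not the paper's diffusion model; all names are hypothetical):

```python
import numpy as np

def reconstruct_section(event_voxels, prev_frame):
    """Stand-in for one section reconstruction: each section is
    conditioned on the last frame of the previous section.  Toy
    dynamics simply drift the conditioning frame by the event signal."""
    frames = [prev_frame]
    for i in range(event_voxels.shape[0]):
        frames.append(frames[-1] + 0.1 * event_voxels[i])
    return np.stack(frames[1:])

def autoregressive_e2v(sections, init_frame):
    """Chain sections: the last frame of each reconstructed section
    conditions the next one, so the video length is unbounded."""
    video, cond = [], init_frame
    for ev in sections:
        frames = reconstruct_section(ev, cond)
        video.append(frames)
        cond = frames[-1]  # previous-frame conditioning
    return np.concatenate(video)

# Three sections of 8 event-voxel frames each -> 24 output frames.
sections = [np.ones((8, 4, 4)) for _ in range(3)]
video = autoregressive_e2v(sections, np.zeros((4, 4)))
print(video.shape)  # (24, 4, 4)
```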
> **Metrics for temporal consistency.**
>
- Upon the suggestion, we evaluate the results based on the temporal quality metrics from VBench [76]. As shown in Tab. A-4, our method significantly outperforms others on subject consistency and background consistency, while achieving comparable performance on motion smoothness. These results demonstrate the effectiveness of our approach in maintaining temporal consistency. We will include these results in the camera-ready version.
Table A-4: Quantitative Comparison on temporal consistency on ECD based on VBench metrics.
| | Subject Consistency | Background Consistency | Motion Smoothness |
| --- | --- | --- | --- |
| E2VID [40] | 52.14% | 85.26% | 97.62% |
| FireNet [44] | 49.61% | 82.78% | 98.40% |
| E2VID+ [48] | 51.83% | 85.33% | 97.56% |
| FireNet+ [48] | 47.97% | 83.23% | 97.11% |
| SPADE-E2VID [7] | 50.58% | 82.61% | **98.41%** |
| SSL-E2VID [37] | 51.89% | 84.86% | 95.97% |
| ET-Net [54] | 55.49% | 86.85% | 97.72% |
| HyperE2VID [12] | 50.41% | 83.50% | 97.59% |
| **LaSe-E2V (Ours)** | **84.25%** | **93.39%** | 98.11% |
| *GT (Empirical Max)* | *88.29%* | *93.65%* | *98.67%* |
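As a rough illustration of what these consistency metrics measure, one can compute frame-to-frame cosine similarity over per-frame feature vectors (a simplified proxy only; VBench's actual protocol uses learned image features and its own aggregation):

```python
import numpy as np

def frame_consistency(features):
    """Mean cosine similarity between consecutive frame feature
    vectors -- a rough proxy for subject/background consistency.
    `features` has shape (n_frames, feature_dim)."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = np.sum(feats[:-1] * feats[1:], axis=1)
    return float(np.mean(sims))

# A slowly drifting feature trajectory scores high ...
t = np.linspace(0, 1, 16)[:, None]
smooth = np.hstack([np.cos(0.1 * t), np.sin(0.1 * t)])
# ... while independent random per-frame features score much lower.
rng = np.random.default_rng(0)
noisy = rng.standard_normal((16, 2))
print(frame_consistency(smooth), frame_consistency(noisy))
```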
> **Clarification on the Variable "V".**
>
- Thanks for the question. The **V** in bold represents N segments. We will clarify this in the final version.
> **Clarification on the Variable $\epsilon$.**
>
- Yes, the noisy input latent representation is typically denoted as $z_t$ during the training process, while $\epsilon$ refers to the sampling noise input during inference. Sorry for the confusion. We will clarify it in the final version.
> **Adding reference to ESA.**
>
- Thanks for the suggestion. We will add the reference for this module in the revision. While SD [42] serves as a baseline architecture to incorporate text embedding, we introduce ESA to enhance spatial-temporal consistency between the event data and video by considering the unique characteristics of event data.
> **Adding Cross-Attention to section title of ESA.**
>
- Thanks for the insightful comment. We will revise the title in the final version to better reflect the inclusion of cross-attention.
> **About the value of $\lambda$.**
>
- Thanks for the question. The value of $\lambda$ is set to 0.01 for all the experiments. We will clarify it in the implementation details.
**Additional Reference:**
[76] Huang Z, He Y, Yu J, et al. Vbench: Comprehensive benchmark suite for video generative models[C]//CVPR. 2024.
---
Rebuttal Comment 1.1:
Comment: Hallucinations:
I understand that the ESA module and mask loss help mitigate the problem of "hallucinations." However, this information does not present anything new regarding hallucinations. Additionally, as shown in Figure 7, a prompt had to be manually adjusted to generate a similar background, and the result still differs from reality.
Ambiguity in Text Prompts:
As shown in Figure 7, when no events are triggered in large regions, it becomes necessary to manually generate a text prompt that, according to the user, matches reality. This introduces ambiguity into the process. Furthermore, even when a close image can be generated with the prompt, the textures and forms often differ significantly from the real ones.
Computational Resources and Inference Times:
Table A-1 clarifies this question; however, it would be helpful to include the memory (VRAM) required to run each model.
Artifacts Not Captured by Traditional Metrics in Figure 3:
Figure 3 shows the qualitative results from the HS-ERGB dataset, but no quantitative results are provided for this dataset. The ECD, MVSEC, and HQF datasets might not capture the artifacts (hallucinations) produced by the proposed LaSe-E2V model because the camera is constantly moving in those scenarios. However, in situations like the HS-ERGB dataset, where both the background and the camera are stationary, the proposed model may yield worse quantitative results for the HS-ERGB dataset.
Additionally, if we look at Figure 4, in the third column, there is a poster with letters that are completely distorted by the proposed LaSe-E2V model, making it impossible to read the content. However, since these are small regions relative to the size of the image, traditional metrics (MSE, SSIM, and LPIPS) do not adequately reflect these artifacts.
Temporal Consistency Between Reconstructed Sections:
Information about the autoregressive pipeline helps clarify doubts related to video generation during the inference stage.
Metrics for Temporal Consistency:
Doubts related to temporal consistency have been clarified.
Reference to ESA:
References regarding the ESA module and the main similarities and differences with respect to SD (in inpainting) were not added.
Remaining Questions:
The rest of the questions were answered satisfactorily.
---
Rebuttal 2:
Title: Author response to Reviewer H8fU (1/2)
Comment: Thank the reviewer for elaborating on the points. We address the reviewer's concerns below:
> **Hallucinations**
>
- Hallucination is a potential **side effect** introduced by incorporating semantic priors from the diffusion model into our framework. In this regard, our proposed techniques are demonstrated to be effective in reducing hallucinations, though **not entirely** eliminating them, which remains a challenge in the diffusion model research community. Extensive experimental results (Tab. 1 and Fig. 4) show that our method significantly **improves fidelity** and reduces hallucinations in scenes with sufficient event data. Our method also exhibits better reconstruction performance even for scenes with insufficient event data (Tab. A-6). Please note that it is impossible to entirely remove the hallucination effect in regions with insufficient events. However, our method effectively leverages the semantic prior in the diffusion model to reconstruct images whose distribution is **closer** to the real scene, whereas previous E2V methods (e.g., HyperE2VID) **fail** in these regions, as shown in Fig. 1.
- Our framework mainly focuses on providing a potential pipeline to reconstruct the video from events with the guidance of language. This is based on our finding that language naturally conveys abundant semantic information, which is beneficial in enhancing the **semantic consistency** for the reconstructed video (see Lines 39-41). The text guidance can serve as a form of human intervention through text prompts, which is **similar** to the way of the text-guided denoising [80,81] and super-resolution[82] methods.
- Fig. 1 compares our method with a previous E2V method (i.e., HyperE2VID). For regions with insufficient events, although our results still differ from reality in some details, they reconstruct a scene consistent with human preference. In contrast, the previous E2V method always reconstructs **haze** in these regions. For Fig. 7, we will update the results from previous E2V methods for a clearer comparison. Quantitatively, Tab. A-6 also shows the superiority of our method in scenes with insufficient events.
> **Ambiguity in Text Prompts**
>
- Our framework incorporates text descriptions as complementary information, marking the first instance of allowing language guidance in the E2V pipeline. This approach is believed to "*inspire future work*" (***ueNs***) and "*provide new ideas*" (***uWG8***). While this may introduce some ambiguity, it also offers the flexibility to manually adapt the reconstructed video according to user preferences, as demonstrated in Fig. 8. This text-guidance manner is also similar to that of text-guided denoising [80,81] and super-resolution [82] methods.
- In regions with sufficient event data, our method improves reconstruction performance by leveraging the semantic priors from the text prompts, ensuring high fidelity. For regions with insufficient event data, the method relies solely on the text prompts to reconstruct a scene that aligns with human preferences. Although this approach may produce textures or details that differ from the real ones, it still generates images that are closer to reality. In contrast, the previous E2V models always reconstruct these regions as indistinct haze, far from the true distribution of real images. Fig. 1 shows the qualitative comparison. Tab. A-6 also shows the quantitative comparison and demonstrates the superiority of our method. We will add more comparisons on these scenes with insufficient events in the revision.
> **Computational Resources and Inference Times**
>
- Table A-5 provides the GPU memory for different E2V models. It shows similar memory costs among different methods.
Table A-5: GPU memory cost of different E2V methods. All tests are conducted on one NVIDIA Tesla 32G-V100 GPU.
| Methods | GPU memory |
| --- | --- |
| E2VID [40] | 12120 |
| ET-Net [54] | 12218 |
| HyperE2VID [12] | 12227 |
| LaSe-E2V (Ours) | 12139 |
---
Rebuttal Comment 2.1:
Comment: Hallucinations
While this paper demonstrates improved video reconstruction quality compared to previous works (such as HyperE2VID), it is important to address the side effect of "hallucinations" in the limitations section. These include small artifacts that traditional metrics like LPIPS, MSE, and SSIM cannot capture.
Ambiguity in Text Prompts
Similar to hallucinations, the potential limitations of introducing "ambiguity" with text prompts in the reconstruction pipeline should be discussed. These effects could lead to unsuitable video reconstructions for safety-critical applications, such as self-driving cars, due to the risk of generating nonexistent objects. This concern should also be highlighted in areas like computational photography, where fidelity is crucial.
Computational Resources and Inference Times
Table A-5 does not specify the units for memory requirements—are they in gigabytes or megabytes? Additionally, what precision is being used—float16, float32, or something else? Moreover, the significant differences in the number of parameters between models do not seem to align with their reported memory consumption. For instance, LaSe-E2V is reported to have 1,801 million parameters, while HyperE2VID has 10.15 million. Yet, LaSe-E2V’s memory consumption (12,139 unknown units) is listed as less than HyperE2VID’s (12,227 unknown units). Please address and correct these discrepancies.
---
Reply to Comment 2.1.1:
Title: Author response to Reviewer H8fU
Comment: Thank the reviewer for the insightful discussion! We will include these points in the camera-ready version.
> **Hallucinations**
>
- We will manually identify small artifacts that traditional metrics fail to capture. These findings will be discussed in the Limitations section of the camera-ready version.
> **Ambiguity in Text Prompts**
>
- Considering that our method reliably ensures fidelity in regions with sufficient event data, it is practical to assign a **confidence map** based on event density to identify high-confidence regions. For safety-critical applications, it is feasible to make decisions based on both the reconstructed video and the confidence map. This approach allows for simultaneous consideration of image quality and safety. We will include this limitation and the potential solution in the revision.
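The confidence-map idea above can be sketched as follows (a minimal numpy illustration; the function name, patch size, and normalization are hypothetical choices, not the paper's implementation):

```python
import numpy as np

def event_confidence_map(voxel_grid, patch=8):
    """Estimate per-pixel confidence from local event density.

    voxel_grid: (bins, H, W) array of accumulated event polarities.
    Returns an (H, W) map in [0, 1]; higher = denser events = higher confidence.
    """
    density = np.abs(voxel_grid).sum(axis=0)            # (H, W) event-count proxy
    H, W = density.shape
    # Average density over non-overlapping patches, then upsample back.
    h, w = H // patch, W // patch
    blocks = density[:h * patch, :w * patch].reshape(h, patch, w, patch).mean(axis=(1, 3))
    conf = np.kron(blocks, np.ones((patch, patch)))     # nearest-neighbour upsample
    m = conf.max()                                       # normalize; guard empty scenes
    return conf / m if m > 0 else conf

# Example: events concentrated in the top-left quadrant.
voxel = np.zeros((5, 64, 64))
voxel[:, :32, :32] = 1.0
conf = event_confidence_map(voxel)
```

A downstream safety-critical system could then gate decisions on `conf`, trusting reconstructed pixels only where the map exceeds a chosen threshold.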
> **Computational Resources and Inference Times**
>
- Sorry for the confusion. The unit in Tab. A-5 is **megabytes** (MB), and **float32** precision is used for all methods.
- Previous E2V methods are based on a recurrent architecture. Theoretically, recurrent models require more memory because they initially allocate enough memory to store the previous states, which potentially enhances the inference speed. However, when we clear the pre-allocated memory before each iteration, we observe that memory usage for HyperE2VID drops to 1372MB. There is also a minor decrease in inference speed due to the time required to reallocate memory. This represents a trade-off between memory usage and time cost, potentially influenced by CUDA tools. We will clarify this in the revision.
---
Rebuttal 3:
Title: Author response to Reviewer H8fU (2/2)
Comment: > **Artifacts Not Captured by Traditional Metrics in Figure 3**
>
- Tab. A-6 provides a quantitative comparison based on three sequences (*horse_11*, *horse_12*, *horse_13*) from the HS-ERGB dataset. Existing E2V methods typically fail to reconstruct regions without events, leading to significantly **worse** quantitative results. Although our method may **not perfectly** reconstruct every detail of reality, it does generate a **reasonable** output that aligns with human preference and is generally **close to the distribution** of the real scene. Therefore, while our results on the HS-ERGB dataset may be less significant than those on "constantly moving" datasets (ECD, MVSEC, HQF), our method is still **substantially better** than the baseline methods.
- Regarding the artifacts in the letters, as also noted by reviewer ***uWG8***, these issues arise because our method relies on the pre-trained diffusion model (SD2 [42]), which also struggles with text rendering. However, the recently released SD3 [79] claims to have improved text rendering capabilities, which could potentially address this problem and improve the performance. We believe that MSE can capture these artifacts at the **pixel level**. However, although minor artifacts exist in small regions, our method excels in **overall** image quality, since previous methods tend to reconstruct misty areas in front of the poster in Fig. 4 (3rd column).
Table A-6: Quantitative comparison of HS-ERGB. Results are conducted on 3 sequences with a total 497 frames.
| Methods | MSE | SSIM | LPIPS |
| --- | --- | --- | --- |
| E2VID | 0.199 | 0.382 | 0.736 |
| HyperE2VID | 0.161 | 0.374 | 0.745 |
| **LaSe-E2V (Ours)** | **0.078** | **0.429** | **0.665** |
> **Reference to ESA**
>
- Our ESA module is specifically designed to enhance spatio-temporal consistency between events and videos. In contrast, the original SD [42] serves as a baseline attention mechanism for integrating conditional input. Our approach differs in **attention design**. While the original SD relies solely on **simple cross-attention** to incorporate various feature conditions, our ESA module takes into account the unique spatial and temporal characteristics of event data. It introduces two **distinct attention** mechanisms for the **spatial** and **temporal** domains, respectively, which fully leverage the constraints provided by the event data and ensure spatio-temporal consistency.
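As a rough illustration of this two-branch design (a minimal numpy sketch with made-up shapes and no learned projections, not the actual ESA implementation), the spatial branch attends over pixel positions within each frame while the temporal branch attends across frames at each pixel:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, kv, d):
    # q: (N, d) queries from video hidden states; kv: (M, d) event features.
    attn = softmax(q @ kv.T / np.sqrt(d))               # (N, M) attention weights
    return attn @ kv, attn

rng = np.random.default_rng(0)
T, HW, d = 4, 16, 8                                     # frames, pixels per frame, channels
z  = rng.normal(size=(T, HW, d))                        # video hidden states
ze = rng.normal(size=(T, HW, d))                        # event features

# Spatial branch: within each frame, pixels attend to that frame's event features.
spatial_out = np.stack([cross_attention(z[t], ze[t], d)[0] for t in range(T)])

# Temporal branch: at each pixel, frames attend to event features across time.
temporal_out = np.stack([cross_attention(z[:, p], ze[:, p], d)[0] for p in range(HW)], axis=1)
```

In a real module, `q`, `kv` would be produced by learned linear projections and the two branch outputs fused back into the U-Net hidden state.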
**Additional Reference:**
[80] Duan H, Min X, Wu S, et al. UniProcessor: A Text-induced Unified Low-level Image Processor[C]//ECCV. 2024.
[81] Qi C, Tu Z, Ye K, et al. Tip: Text-driven image processing with semantic and restoration instructions[C]//ECCV. 2024.
[82] Gandikota K V, Chandramouli P. Text-guided Explorable Image Super-resolution[C]//CVPR 2024. | Summary: This paper addresses the issue of artifacts and regional blur in existing event-to-video (E2V) reconstruction algorithms by leveraging the rich semantic information in language to enhance the semantic consistency of reconstructed videos. The authors propose a language-guided E2V generation model that employs existing text-conditional diffusion models as a framework. They use an Event-guided Spatiotemporal Attention (ESA) module for fine-grained spatial alignment and temporal continuity, an event-aware mask loss for further ensuring temporal consistency, and an event-aware noise initialization to address training-testing discrepancies. Extensive comparative experiments validate the algorithm's performance, and ablation studies demonstrate the effectiveness of each component.
Strengths: 1 This paper is the first to tackle the event data reconstruction task from a language-guided perspective. This approach could inspire future work in the field.
2 The proposed algorithm achieves optimal performance on nearly all metrics across multiple datasets, demonstrating superior visual effects and validating the algorithm's effectiveness.
3 The paper presents a high-quality body of work, including effective methods and extensive experimental validation.
Weaknesses: 1 The paper lacks a detailed comparison of the algorithm's inference speed and model size. The authors acknowledge the inherent speed limitations of using diffusion models, emphasizing the importance of providing this comparison.
2 In practical applications lacking APS reference frames, obtaining accurate textual information that matches the scene description is difficult or nearly impossible. Using existing text generation models to extract semantic information introduces additional reference information, compromising fairness to some extent.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why is it necessary for the event activation regions of adjacent frames to be similar, rather than the regions without events?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss the limitations of their work, particularly concerning the required training data and the inference speed of diffusion models, in accordance with official guidelines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the valuable suggestions. We address the questions below.
> **Detailed comparison of inference speed and model size.**
>
- Please refer to the **Global Response to All Reviewers**. As shown in Tab. A-1, our method requires considerable inference time due to multiple denoising steps, showing a limitation of diffusion models. However, our approach "*could inspire future work in the field*" (***ueNs***) by incorporating language guidance and proves that "*it is possible to use diffusion-based models for the event-based video reconstruction task*" (***H8fU***).
> **Textual information is difficult to obtain and compromises fairness.**
>
- We respectfully disagree. The textual description is not difficult to obtain for our framework. Please note that our method only requires coarse text descriptions to leverage the capabilities of the pre-trained T2I diffusion model. These descriptions, generated by the tagging model RAM, simply identify objects in a tagging style, such as "car," "road," and "tree". These can be easily provided by humans in the absence of APS frames.
- Also, please note that it is not uncommon to introduce additional priors for the event-to-video (E2V) reconstruction task. For instance, Zhang et al. [72] incorporated optical flow to address event-to-image reconstruction as a linear inverse problem. Our work is the first to introduce text as a guiding prior for event-to-video reconstruction, which, as noted by reviewer ***ueNs***, *"could inspire future work in the field"*.
> **Why is it necessary for the event activation regions of adjacent frames to be similar?**
>
- Sorry for the misunderstanding. It should be the regions **without** events that are similar. The value zero is assigned to the event-activated area, and vice versa. We will rectify Line 196 in the revised version.
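Following this convention (zero in event-activated areas, one elsewhere), the mask and a corresponding temporal-consistency loss can be sketched as follows (a simplified numpy illustration; the function names and the exact loss form are hypothetical, not the paper's event-aware mask loss):

```python
import numpy as np

def event_aware_mask(voxel_grid):
    """Binary mask: 0 where events fired between two frames, 1 elsewhere.

    voxel_grid: (bins, H, W) accumulated events between two adjacent frames.
    """
    activated = np.abs(voxel_grid).sum(axis=0) > 0
    return np.where(activated, 0.0, 1.0)

def masked_temporal_loss(frame_t, frame_t1, mask):
    """Penalize changes between adjacent frames only in no-event regions."""
    return float(np.mean(mask * (frame_t1 - frame_t) ** 2))

# Example: events only in the left half -> only the right half is constrained.
voxel = np.zeros((3, 4, 4))
voxel[0, :, :2] = 1.0
mask = event_aware_mask(voxel)
```

Because no events means no brightness change, the loss pushes the reconstruction to stay static exactly where the sensor reported nothing.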
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My concern still lies with the acquisition of the text. All reviewers mentioned that the issue of hallucinations makes reconstruction unreliable. Event cameras are not good at capturing image details but are instead suitable for high-speed, real-time applications; unreliable reconstruction contradicts the motivation for using event cameras. Additionally, the reconstruction method in this paper is nearly 100 times slower than previous event reconstruction algorithms. If we consider the acquisition of text prompts, the application scenarios become even more limited.
If we do not consider the application and focus solely on the method itself, the paper does not explore the impact of different text prompts on the results. The authors mention that "our method requires only coarse text descriptions" and that "this coarse-grained text is effective and does not necessitate a trial-and-error process." However, if the text information is insignificant, then what is the significance of the text in this model? Is it the case that relying solely on a pre-trained diffusion model would already yield good results? This also contradicts the motivation of the paper.
After reading the concerns raised by other reviewers, I found the text-related parts of the experiments confusing. The qualitative results do not indicate the text used, making it difficult to assess whether the improvements in the results are related to the text. For instance, in Figure 4, the first three scenes show only an improvement in contrast, making it hard to see any other differences. In the second-to-last column, the letter "g" is clearly reconstructed incorrectly. What caused this? It's impossible to determine whether this is related to the text prompt.
Regarding the source of the text prompts for the events, the authors mentioned in their response to me that "These can be easily provided by humans in the absence of APS frames." However, in their response to another reviewer, they stated that "event-based multi-modality models such as EventBind[77] and EventClip[78] can also be employed to generate text descriptions directly from the event stream." For the former, I cannot imagine whether a person would be watching the events to provide a text description or watching the scene to do so. It seems that if a person can directly observe the scene, then reconstruction wouldn't be necessary. If a person is observing the events, the sparsity of the events makes it difficult to provide an accurate description. For the latter, if event-based multi-modality models are used, these models would require the presence of events to function. However, the authors emphasize that the text is meant to enhance areas without events. Can these models provide accurate descriptions for sparse event regions in the absence of events?
As this paper focused on event reconstruction with text as additional guidance, I believe it is essential to thoroughly explore the motivation and role of the text.
---
Reply to Comment 1.1.1:
Title: Author response to Reviewer ueNs (2/2)
Comment: > **Regarding the source of the text prompts for the events**
>
- Our framework primarily focuses on providing a potential pipeline for reconstructing videos from events with a **language-guided perspective**. This is based on our finding that language naturally conveys abundant semantic information, which enhances the semantic consistency of the reconstructed video (see Lines 39-41). Such text guidance manner is **similar** to text-guided denoising [80,81] and super-resolution [82] methods, which also rely on natural language as a user-friendly interface to control the image restoration process. In this work, we do not focus much on the specific sources of the text prompts, as these can be flexible and generic, varying depending on the application scenarios. For example, if a DAVIS346 camera is used, obtaining text from APS frames is convenient. If a Prophesee camera is used and only event data is available, it is feasible to provide tagging-based text prompts derived from event-based multi-modality models. Additionally, in complex scenes lacking sufficient event data, it is practical to incorporate human intervention in an interactive manner to support human control. Overall, our method provides a language-guided interface for E2V reconstruction that could be **feasible** and **generic**.
- The reviewer might misunderstand the significance of the text prompts. The text is **not only** used to reconstruct regions without events. For regions with event data, our method incorporates text to provide complementary semantic information, ensuring semantic consistency and further improving performance. For regions without event data, our method can rely on the text to reconstruct a reasonable scene close to the distribution of the real images. The results in Tab. 1 and Tab A-6 have demonstrated the superiority of our method in both scenarios.
- Regarding event-based multi-modality models, they offer an option for providing semantic information, which can be explored in future work. Since these models are trained with large-scale event-image paired data, they have the potential to provide prior semantic information that is absent in event data. Based on these text prompts, the method reconstructs video from event data, while effectively exploiting the semantic prior in the diffusion model to ensure semantic consistency.
Thanks for the insightful question! We will include this discussion in the camera-ready version.
---
Rebuttal 2:
Title: Author response to Reviewer ueNs (1/2)
Comment: Thanks for bringing up this thoughtful discussion.
> **About the hallucinations issue**
>
- We respectfully disagree with the reviewer. Please note that our reconstructed results are **not unreliable**. In regions with sufficient event data, our method enhances performance with text guidance while ensuring fidelity. As demonstrated in Tab. 1 and Fig. 4, our method significantly **outperforms** previous methods. We also encourage the reviewer to recheck the demo video provided in our supplementary materials. For regions with insufficient event data, our method relies primarily on text prompts to reconstruct a scene that aligns with human preferences. As demonstrated in Fig. 1, HyperE2VID struggles with **severe artifacts** (haze-like artifacts), typically in background regions, far from the true distribution of real images. In contrast, our method exhibits **higher-quality** reconstruction results that are closer to the ground truth. These results indicate that our method can subtly leverage the semantic information from language and thus ensure semantic consistency of the reconstructed video. To further demonstrate the effectiveness of our method, we provide a quantitative comparison on the HS-ERGB dataset, which includes larger regions lacking event data, as shown in Tab. A-6. Our method substantially **outperforms** baseline methods with an MSE of 0.078. Evidently, the quantitative results verify the superiority and reliability of our method.
- In this work, we mainly focus on a new research direction of exploring E2V reconstruction from a language-guided perspective, an approach affirmed by other reviewers to "*inspire future work*" (***ueNs***) and "*provide new ideas*" (***uWG8***). We did not focus much on the computational efficiency of the diffusion model, which is left as future work. Please note that using language guidance does not actually hamper the application value. Such a text-guided manner is also similar to recent methods used in text-guided denoising [80,81] and super-resolution [82], which have provided broad applications for low-level vision in a user-friendly manner. Our research is **application-significant** as the language provides abundant semantic information, beneficial for ensuring the **semantic consistency** of the reconstructed video. The text prompts also provide a way for human intervention to control the reconstruction process.
> **What is the significance of the text in this model?**
>
- The text information serves as a **crucial** component to **activate** the semantic prior in the diffusion model. As clarified above, our method effectively enhances performance in regions **both** with sufficient and insufficient event data by utilizing text prompts. To simplify text prompts and enhance the model's robustness, we utilize only coarse, tagging-style descriptions for training. Without text prompts, it is challenging to exploit the semantic prior in the diffusion model to ensure semantic consistency. As shown in Tab. 3 (1st row), without the text prompt, our method yields modest results. However, when text information is incorporated, it significantly **improves** the model's ability to leverage the semantic prior in the diffusion model, thereby enhancing performance.
> **Text-related parts of the experiments**
>
- For the qualitative comparison in Fig. 4, we respectfully suggest the reviewer recheck the supplementary **video** material. It shows more than just an improvement in contrast: the previous methods tend to reconstruct misty images with distinct artifacts, especially in regions with insufficient event data. Our method effectively exploits the semantic prior of the diffusion model with the guidance of text prompts, which ensures semantic consistency. Additionally, Fig. 1 shows another extreme case. While the previous method (i.e., HyperE2VID) fails in the regions without event data, our method still reconstructs a reasonable scene according to the text guidance, which is closer to the ground truth. Tab. A-6 also shows the quantitative comparison on this dataset (i.e., HS-ERGB) and demonstrates the superiority of our method.
- Regarding the artifacts in the letter “g”, as also noted by reviewer ***uWG8***, these issues arise because our method relies on the prior from the pre-trained diffusion model (SD2 [42]), which also struggles with text rendering. However, the recently released SD3 [79] claims to have improved text rendering capabilities, which could potentially address this problem and improve the performance. | Summary: This paper uses abundant semantic information and raw event information to guide the reconstruction of RGB images from event images based on U-Net. Furthermore, this paper introduces event-aware mask loss to ensure temporal coherence and a noise initialization strategy to enhance spatial consistency. Experiments demonstrate that the proposed algorithm has a strong reconstruction performance.
Strengths: 1. Constructing ESA utilizes abundant semantic information and raw events to construct the cross attention with the combination of frames and raw events separately to guide event image reconstruction to RGB image.
2. The mask loss is constructed to supervise the reconstructed image from the temporal dimension strongly.
3. Extensive experiments on three datasets covering diverse challenging scenarios (e.g., fast motion, low light) demonstrate the superiority of this method.
Weaknesses: 1. In the ESA module, two modalities of information, raw events and text were used for cross-attention with frames. Still, no ablation experiments were given, which makes it impossible to determine whether the final experimental results of this work are more useful for raw events or text. In particular, raw events were inserted in each U-Net section.
2. In this paper, there are no reported flops as well as parameters, especially relative to some of the second-best methods in Table 1, what is the approximate percent increase in the two parameters?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the tagging models mentioned in the paper? There are no specifics and no ablation experiments.
2. The comparison methods in the references are mostly from 2020 and 2021, with only one method from 2024.
3. Other methods are compared on SSIM metrics. Why not give SSIM metrics instead of SSIM* metrics?
4. How do you define spatial consistency? Adding ‘Noise Initialization’ only in the testing phase did not enhance the consistency of the model itself in the spatial dimension.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses. The final rating will be made based on the rebuttal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for valuable comments and suggestions. We address the concerns below:
> **About ablation experiments for ESA on raw events and text.**
>
- The reviewer might misunderstand the ESA module. Indeed, the ESA and the text are **separate** parts in our framework. As outlined in Sec. 4.2 (Line 157-183), the ESA module consists of event spatial attention and event temporal attention layers, which compute attention based on the hidden state $z$ and event feature $z_e$, respectively. This way, ESA module facilitates spatio-temporal consistency between event data and video. The text input is integrated separately via a cross-attention layer that follows the original LDM [42], and it is not involved in the ESA module. Since event input is essential for the E2V reconstruction task, we have already provided ablation studies on both ESA and text with event input to demonstrate their individual effectiveness. Hereby, we reiterate the results as follows:
- To demonstrate the impact of text description, the ablation study in Tab. 2 (1st row vs. 3rd row) and Fig. 7 (left) compare the E2V results with (w) and without (w/o) text. Please check them.
- Tab. 3 (1st row) compares the performance of the baseline trained with simple channel-wise concatenation of events, without the ESA module, showing a significant performance drop. This ablation result confirms the effectiveness of ESA module.
> **About FLOPs and parameters comparison.**
>
- Thanks for the suggestion. Please refer to **Global Response to All Reviewers**.
> **About the tagging models and the corresponding ablation experiments.**
>
- In Line 216, we employ the off-the-shelf tagging model RAM [61], which serves as a prompting model to provide text descriptions for the datasets. In fact, a recent work (SeeSR [70]) has also demonstrated the superiority of RAM compared to other models because of **rich objects and concise description**.
- As suggested, we also tested BLIP [71] on a sampled sequence (i.e., *boxes* in HQF) to further evaluate the influence of the prompting model, as shown in Tab. A-2. BLIP generates reasonable caption-style text prompts and shows **nearly identical reconstruction performance** on MSE (0.025 vs 0.027). A detailed discussion will be provided in the camera-ready revision.
Table A-2: Comparisons between different prompting models on *boxes* of HQF.
| Models | MSE | SSIM | LPIPS |
| --- | --- | --- | --- |
| RAM [61] | 0.025 | 0.557 | 0.196 |
| BLIP [71] | 0.027 | 0.546 | 0.207 |
> **Additional comparison methods.**
>
- We mainly compared with the latest state-of-the-art method, i.e., HyperE2VID (TIP 2024); by outperforming it, our method is implicitly superior to the methods published in 2022 and 2023. Nevertheless, upon the suggestion, we have provided more comparison methods in Tab. A-3, which further demonstrate the superior performance of our method. We will update Tab. 1 of the main paper in the camera-ready revision.
Table A-3: Comparison with more event-to-video methods.
| Methods | | ECD | | | MVSEC | | | HQF | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | MSE | SSIM | LPIPS | MSE | SSIM | LPIPS | MSE | SSIM | LPIPS |
| Zhang et.al. (TPAMI2022) [72] | 0.076 | 0.519 | 0.457 | - | - | - | - | - | - |
| EVSNN (CVPR2022) [73] | 0.061 | 0.570 | 0.362 | 0.104 | 0.389 | 0.538 | 0.086 | 0.482 | 0.433 |
| PA-EVSNN (CVPR2022) [73] | 0.046 | 0.626 | 0.367 | 0.107 | **0.403** | 0.566 | 0.061 | 0.532 | 0.416 |
| CISTA-LSTC (TPAMI2023) [74] | 0.038 | 0.585 | 0.229 | - | - | - | 0.041 | 0.563 | 0.271 |
| CISTA-Flow (Arxiv2024) [75] | 0.047 | 0.586 | 0.225 | - | - | - | 0.034 | **0.590** | 0.257 |
| HyperE2VID (TIP2024) [12] | 0.033 | 0.576 | 0.212 | 0.076 | 0.315 | 0.476 | **0.031** | 0.531 | 0.257 |
| LaSe-E2V (Ours) | **0.023** | **0.629** | **0.194** | **0.055** | 0.342 | **0.461** | 0.034 | 0.548 | **0.254** |
> **About SSIM and SSIM\* metric.**
>
- The SSIM metric can be ambiguous because it involves several hyper-parameters that may differ across codebases. For example, in the `structural_similarity` function of the **skimage** package, parameters like `gaussian_weights` and `sigma` control the Gaussian spatial weighting of each patch, `use_sample_covariance` indicates whether covariances are normalized by N-1 or N, and `K1` and `K2` are algorithm-specific constants that need to be set. For this reason, we **reevaluated** all comparison methods using a unified metric configuration, denoted as the SSIM* scores. We will further clarify this point in the revision.
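To illustrate this ambiguity with skimage (the second configuration matches the settings of the original SSIM paper as exposed by skimage's keyword arguments; it is not necessarily our exact SSIM* configuration):

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
img = rng.random((64, 64)).astype(np.float64)
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0, 1)

# Default skimage configuration: 7x7 uniform window, sample covariance.
s_default = structural_similarity(img, noisy, data_range=1.0)

# Configuration of Wang et al.'s original SSIM paper: 11x11 Gaussian window,
# sigma=1.5, population covariance, K1=0.01, K2=0.03.
s_gaussian = structural_similarity(
    img, noisy, data_range=1.0,
    gaussian_weights=True, sigma=1.5,
    use_sample_covariance=False, K1=0.01, K2=0.03,
)
```

The two settings generally yield different scores for the same image pair, which is exactly why a single pinned-down configuration is needed for a fair cross-method comparison.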
> **How to define spatial consistency and how does noise initialization enhance the consistency?**
>
- Spatial consistency denotes the consistency between the event data and the reconstructed video. Specifically, events typically occur at the **edge/texture** parts of the scene, and the reconstructed video needs to align with the event data in its spatial structure.
- Noise initialization mainly focuses on mitigating the **train-test gap** during inference, which is a common challenge in diffusion models. Intuitively, accumulated event data provides structural information (e.g., edges) about the scene, acting as an **additional constraint** during the denoising process.
**Additional Reference:**
[71] Li J, Li D, Savarese S, et al. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models[C]//ICML, 2023.
[72] Zhang Z, Yezzi A J, Gallego G. Formulating event-based image reconstruction as a linear inverse problem with deep regularization using optical flow[J]. TPAMI, 2022, 45(7): 8372-8389.
[73] Zhu L, Wang X, Chang Y, et al. Event-based video reconstruction via potential-assisted spiking neural network[C]//CVPR. 2022.
[74] Liu S, Dragotti P L. Sensing diversity and sparsity models for event generation and video reconstruction from events[J]. TPAMI, 2023, 45(10): 12444-12458.
[75] Liu S, Dragotti P L. Enhanced Event-Based Video Reconstruction with Motion Compensation[J]. arXiv, 2024. | Rebuttal 1:
Rebuttal: ### **Global Response to All Reviewers**
We sincerely thank the reviewers for their constructive feedback. We are pleased that the reviewers found our **method to be novel and effective** and the **performance to be strong and of high quality**, and believe our work will **inspire future work** in the community. Below, we address the questions raised and promise to thoroughly revise the paper accordingly.
> **Complexity analysis**
>
- Following the reviewers' suggestions, we have provided a detailed computational complexity analysis in Tab. A-1, which includes recent event-to-video reconstruction methods and related diffusion-based approaches. Our method requires a significant amount of inference time due to the 50 denoising steps, in contrast to previous single-step event-to-video methods. However, we kindly note that this is a common challenge for all diffusion-based models, as observed in diffusion-based super-resolution [69,70] and depth estimation [66,67] methods. To reduce inference time, some research (e.g., [65]) has already explored decreasing the number of denoising steps, offering a promising direction for future improvement of our framework.
- Please note that previous approaches [54, 12] often struggle to recover regions *without active events*. In contrast, as demonstrated in Fig. 1, our method achieves **holistic, semantic-aware reconstruction**. As acknowledged by reviewers ***ueNs*** and ***uWG8***, this paper aims to *"**provide new ideas for subsequent works**"* and *"**inspire future work in the field**"* by incorporating language guidance.
- As affirmed by reviewer ***H8fU***, this paper demonstrates that "***it is possible to use diffusion-based models for the event-based video reconstruction task***," thereby leveraging the rich semantic priors of large-scale LDM. This approach holds promise for extending to other event-based tasks, including video frame interpolation, deblurring, and denoising.
In summary, albeit with lower inference speed than conventional E2V methods (though higher than diffusion-based super-resolution and depth estimation methods), our work brings new ideas and may inspire future research in event-based vision.
Table A-1: Complexity comparison on various methods. All tests are conducted on one NVIDIA Tesla 32G-V100 GPU.
| Category | Method | Parameters | Inference time (per frame) |
| --- | --- | --- | --- |
| Conventional Event-to-Video | ET-Net [54] | 22.18M | 0.0124s |
| | HyperE2VID [12] | 10.15M | 0.0043s |
| Diffusion-based Depth Estimation | DepthFM [66] | 891M | 2.1s |
| | Marigold [67] | 948M | 5.2s |
| Diffusion-based Super-Resolution | StableSR [68] | 1409M | 18.70s |
| | PASD [69] | 1900M | 6.07s |
| | SeeSR [70] | 2284M | 7.24s |
| Diffusion-based Event-to-Video | **LaSe-E2V (Ours)** | 1801M | 1.09s |
**Additional Reference**:
[65] Liu X, Zhang X, Ma J, et al. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation[C]//ICLR. 2023.
[66] Gui M, Fischer J S, Prestel U, et al. Depthfm: Fast monocular depth estimation with flow matching[J]. arXiv, 2024.
[67] Ke B, Obukhov A, Huang S, et al. Repurposing diffusion-based image generators for monocular depth estimation[C]//CVPR. 2024.
[68] Wang J, Yue Z, Zhou S, et al. Exploiting diffusion prior for real-world image super-resolution[J]. IJCV, 2024: 1-21.
[69] Yang T, Ren P, Xie X, et al. Pixel-aware stable diffusion for realistic image super-resolution and personalized stylization[J]. arXiv, 2023.
[70] Wu R, Yang T, Sun L, et al. Seesr: Towards semantics-aware real-world image super-resolution[C]//CVPR. 2024. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HyperPrism: An Adaptive Non-linear Aggregation Framework for Distributed Machine Learning over Non-IID Data and Time-varying Communication Links | Accept (poster) | Summary: Traditional DML methods are limited by (1) data heterogeneity and (2) time-varying communication links. In this work, the authors present a non-linear aggregation framework, HyperPrism, that leverages Kolmogorov Means to conduct distributed mirror descent, with the averaging occurring within the mirror descent dual space. The proposed method can improve the convergence speed by up to 98.63% and scales well to more devices compared with the state-of-the-art, all with little additional computation overhead compared to traditional linear aggregation.
Strengths: 1. The speedup of convergence is satisfactory.
2. The theoretical discussion is sufficient.
Weaknesses: 1. The authors claim that they use hypernetworks to predict p (since they use softmax, the type might be int), the exponent of the mapping function. This deviates somewhat from the common practice of hypernetworks, which usually output more complex parameter weights, such as classifier weights (e.g., a 768 × 100 float matrix). I would prefer the authors drop the hypernetwork framing and simply introduce the HN as a simple MLP.
2. While HNs act as a crucial part of HyperPrism, the analyses of them seem limited. Since HN has softmax layers, I would be curious about (1) the value set of p and its influence, (2) the variance of p on each distributed machines when given different gradients, (3) the results of preset p (≠ 1) (like directly use the most outputted p of HNs rather than 1). I would suggest these ablation studies be added.
3. While the performance of HyperPrism is excellent on the major experiments, the baselines used for comparison seem out of date (before 2021); it would be more reasonable if the authors could provide more recent baselines (from 2022 or 2023).
4. The results of the experiments are a bit confusing, especially in terms of improvement. For example, in the last column of Table 1, Conv Rds of the proposed method is 13, while the best baseline is 14, so how is 85.86% computed? Is it computed against the worst-performing baseline? It would be better if the authors could provide detailed explanations.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N.A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: R3-1: The difference between Hypernetwork and MLP.
Response: You make an excellent point about HN and MLP. The HN itself is basically a simple MLP. However, in terms of parameter selection, there are fundamental differences between the two. First, the MLP learns a direct mapping from the determined input to the target values, treating the selection process as a prediction task, while the HN models the parameter selection process as a learning problem. It is trained as a meta-model to predict the optimal parameters, based on an embedding representation of the local models. Second, the MLP only optimizes the model itself to find the best set of model parameters. In contrast, the HN allows the model and the model embedding to be jointly optimized. The embedding vector is also updated to better "represent" the local models. So regardless of whether the HN outputs integers or matrices, the optimization process is quite distinct from simply optimizing an MLP. Moreover, the HN gradients are calculated based on the gradients of the local models (as shown in Equation 9), giving HyperPrism a better ability to capture the relationships between the loss function and the optimal P. Recent works such as [R1; R2] also considered similar HN-based optimization approaches, showing the potential benefits over MLP-based parameter selection.
R3-2: The influence of P.
Response: Due to the space limitations, some of the ablation studies are only presented in the Appendix. In Section 8.2 (lines 437-444), we use a preset fixed P to study how various P values ($\{P \in \mathbb{N} \mid 1 \le P \le 21\}$) impact the model performance. The experimental results clearly emphasize the importance of choosing the appropriate $P$. This insight also inspires us to jointly optimize the selection of $P$ along with the performance of the model, and adaptively select the optimal $P$ during the training process.
R3-3: The baselines seem out of date.
Response: Although the existing baseline models may not be the most recent, they are still recognized as powerful and effective methods in extreme scenarios of data heterogeneity and time-varying communication links, and have been used as baselines in recent works [R3]. We choose these baselines to ensure that our method can be fairly compared with industry standards and to highlight the innovativeness and advantages of our approach in utilizing non-linear aggregation to accelerate DML training. We believe these innovations can bring new insights into the DML domain, and we will seriously consider exploring more recent baseline models in future work.
R3-4: The confusion improvement in experimental results.
Response: We apologize for any confusion caused by the performance comparisons in the results. The percentage improvements reported are all relative to the D-PSGD method, which is one of the most influential and widely applied works in DML studies. It is important to note that our proposed method not only demonstrated superior performance over D-PSGD, but also outperformed more recently published methods like ADOM and Mudag, with convergence accuracy and convergence speed improvements of up to 4.87% and 86.36%, respectively, still demonstrating a significant advantage in convergence speed. This underscores the significance and contributions of our work, as it brings new insights to this research area.
[R1] Xiaosong Ma, Jie Zhang, Song Guo, and Wenchao Xu. Layer-wised model aggregation for personalized federated learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10092–10101, 2022.
[R2] Aviv Shamsian, Aviv Navon, Ethan Fetaya, and Gal Chechik. Personalized federated learning using hypernetworks. In International Conference on Machine Learning, pages 9489–9502. PMLR, 2021.
[R3] Adel Nabli and Edouard Oyallon. Dadao: Decoupled accelerated decentralized asynchronous optimization. In International Conference on Machine Learning, pages 25604–25626. PMLR, 2023. | Summary: This paper addresses challenges in distributed machine learning (DML) caused by non-IID data and
unstable communication links. The proposed HyperPrism framework uses mirror descent and adaptive
mapping to project models into a mirror space for better aggregation and gradient steps.
It employs adaptive weighted power means (WPM) for efficient model aggregation, significantly
improving convergence speed and scalability. The framework's analytical results show that HyperPrism
effectively handles decentralized DML challenges, offering a robust solution for edge device data processing.
Strengths: a) The paper exhibits notable originality by addressing the dual challenges of non-IID data and time-varying
communication links in distributed machine learning (DML). The HyperPrism framework introduces a novel
combination of mirror descent and adaptive weighted power means (WPM) for effective model aggregation.
b) The quality of the research is high, demonstrated through rigorous theoretical analysis and comprehensive
experimental validation, providing strong support for the framework's claims.
c) The clarity of the paper is commendable, with well-structured explanations and supportive figures and tables,
although some dense sections could benefit from further simplification.
d) The significance of the work is substantial, as it offers a robust solution to a pressing issue in DML,
with implications for improving the efficiency and scalability of edge device data processing. The results
showing enhanced convergence speed and scalability are valuable contributions to the field.
Weaknesses: While the paper is strong overall, there are a few areas that could be improved.
First, some sections are densely packed with technical details, which might be challenging for readers
not deeply familiar with the subject. Simplifying these sections or providing additional explanations
could enhance accessibility.
In addition, the paper could provide more comparative analysis with existing methods to highlight the
specific advantages and potential limitations of HyperPrism. However, due to the limited period of rebuttal,
it is just optional and not necessary.
Lastly, a more detailed discussion on the computational overhead and scalability of the proposed framework
in extremely large-scale settings would be beneficial. Addressing these weaknesses would strengthen the paper
and its contributions.
Technical Quality: 4
Clarity: 3
Questions for Authors: How does HyperPrism perform compared to other non-linear aggregation frameworks or adaptive learning methods in similar scenarios?
Are there specific benchmarks or datasets where it excels or underperforms?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The potential limitation of the proposed method is not discussed in this manuscript. It will be of great importance if the authors can give an insight to the potential readers by comparing the pro/cons of the proposed method compared to the latest works in this topic.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: R2-1: Some sections are densely packed with technical details, which might be challenging for readers.
Response: To the best of our knowledge, HyperPrism is the first non-linear aggregation DML framework that combines mirror gradient descent and hypernetwork techniques. Therefore, a comprehensive explanation of the technical and theoretical details would be necessary to help readers clearly understand our approach. We will make our best efforts to provide concise overviews and technical explanations in the revised version to increase readability.
R2-2: Discussion on the computational overhead and extremely large-scale settings.
Response: We have conducted some experiments to compare the computational time, as presented in Section 8.3 of the Appendix (lines 446-450). The results show that HyperPrism requires more time per training round than the baseline methods. However, it significantly reduces the total number of rounds required for convergence, thereby shortening the overall time to reach a specific accuracy target. Regarding extremely large-scale settings, we conducted experiments with device sizes of {20, 50, 100} and the results show that HyperPrism is minimally affected by the number of devices and maintains excellent acceleration and model performance. Theoretically, larger scale settings will still maintain this robustness to the growth in the number of devices. Please refer to Table 3 and Section 8.9 for more details.
R2-3: How does HyperPrism perform compared to other non-linear aggregation frameworks or adaptive learning methods in similar scenarios?
Response: Our work is the first step in this new non-linear aggregation field, so there is almost no similar approach available for comparison. However, the non-linear aggregation mechanism in HyperPrism is performed via a specific mapping function $\phi(w) = \frac1{p+1} \lVert w\rVert^{p+1}$. According to the theory of mirror descent, it can be replaced by any convex and smooth function. This observation opens up an interesting direction for future work: we plan to explore the use of other mapping functions for non-linear aggregation, aiming to further enhance the performance and capabilities of DML systems.
---
Rebuttal Comment 1.1:
Comment: The responses seem reasonable, so I'll stick with my original rating.
---
Reply to Comment 1.1.1:
Title: Thank you very much for your recognition.
Comment: Thank you very much for your recognition. | Summary: This paper presents HyperPrism, a novel framework for decentralized machine learning (DML) that aims to address the challenges of non-IID data and time-varying communication links. The authors propose a non-linear aggregation method based on Kolmogorov Means and adaptive mapping functions, which they argue improves convergence speed and scalability compared to traditional linear aggregation methods. The paper includes theoretical analysis and experimental results to support their claims.
Strengths: • **Novelty:** The use of non-linear aggregation with adaptive mapping functions is a novel approach to simultaneously address the challenges of non-IID data and time-varying communication in DML.
• **Theoretical Analysis:** The paper provides a theoretical analysis of the convergence behavior of their proposed approach, which is a valuable contribution.
• **Experimental Results:** The experimental results demonstrate somewhat promising improvements in convergence speed and scalability compared to few baseline methods.
Weaknesses: • **Experimental Setup:** The experimental setup could be expanded to include more diverse datasets, models, and settings. The current experiments are limited to MNIST and CIFAR-10 with specific model architectures, and the non-IID setting includes only two extreme cases ($\alpha=0.1$ and $\alpha=10$). Expanding the setup would help to assess the generalizability of HyperPrism's performance improvements.
• **Clarity:** The paper could benefit from improved clarity in some sections. For example, in section 3 the authors discussed HyperPrism without first introducing it. The motivation for using mirror descent and Kolmogorov Means was unclear. The connection between these concepts and the challenges of non-IID data and time-varying communication could be made more explicit.
• **Hyperparameter Tuning:** The paper does not provide sufficient details on how the hyperparameters for the different methods were chosen. This makes it difficult to assess the fairness of the comparison.
**Others minor issues:**
• There seems to be a missing index $i$ on $w$ in equation 1.
• Heterogeneity in referencing in related work section. Some references seem to be typed manually (or at least they are not connected to any entry in bibliography).
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Overall, the paper addresses an important problem in DML and proposes a novel solution with promising theoretical and empirical results. However, the experimental setup and clarity issues mentioned above make it just fall short of this venue’s bar.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: R1-1: Experiments are limited to MNIST and CIFAR-10 and the non-IID setting includes only two extreme cases.
Response: MNIST and CIFAR-10 are the most commonly used datasets in the DML field, and recent works such as [R1] have also chosen these datasets for verification. Moreover, due to space limitations, some experimental results have not been presented in figures, including various non-IID settings ($\alpha \in \{0.1, 1, 10\}$), topology density, number of devices, etc. The proposed HyperPrism demonstrates superiority in both convergence speed and accuracy under various settings. Please refer to Tables 1, 2, and 3 in the experimental section for details (line 281).
R1-2: The motivation for using mirror descent and Kolmogorov Means.
Response: Model merging is a growing field, which in recent years has developed non-linear aggregation methods of its own. It represents the ultra-low update-frequency end of the spectrum, with a single aggregation after training. For ultra-high-frequency updates, i.e., once per gradient, linear aggregation is optimal. For the vast space between these two frequencies, however, other solutions should emerge. One motivation for using non-linear aggregation is the decreased variance in the original parameter space. Our system is designed on the thesis that combining gradients from different models is dangerous if the gradients are computed at vastly different sets of parameters; thus, the main problem is synchronizing these sets of parameters. Kolmogorov Means let us tune this: for example, with a weighted power mean of $p=11$, the aggregate is immediately pulled toward the maximum value (near 25 in a simple calculation). Meanwhile, in DML, the primary challenges are data heterogeneity and time-varying communication links. Traditional linear aggregation struggles to address the model divergence stemming from these issues, which hurts performance. The proposed HyperPrism maps models to a dual domain to better align with the geometry of the objective function. It introduces a specific mapping function, $\phi(w) = \frac{1}{p+1} \lVert w\rVert^{p+1}$, transforming models as $w \rightarrow w^p$. Then, the Kolmogorov Means method is applied to achieve non-linear aggregation in the form of the Weighted Power Mean (WPM), enabling HyperPrism to capture a broader array of features, thus making it particularly suitable for scenarios with data heterogeneity and time-varying communication links. We illustrate how HyperPrism leverages WPM to facilitate more efficient aggregation in the Appendix; please refer to Section 8.4 for details (lines 451-459).
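As a concrete illustration (not from the paper's code), the WPM aggregation described in the response above can be sketched as follows; the function name and the equal-weights toy example are our own, and the sketch assumes non-negative parameter values:

```python
import numpy as np

def weighted_power_mean(models, weights, p):
    """Aggregate model parameter vectors via the weighted power mean
    M_p = (sum_i a_i * w_i^p)^(1/p).

    Assumes non-negative parameters; p = 1 recovers ordinary linear
    (weighted-average) aggregation, while large p approaches the max."""
    models = np.asarray(models, dtype=float)        # shape: (n_devices, n_params)
    weights = np.asarray(weights, dtype=float)[:, None]
    return (weights * models ** p).sum(axis=0) ** (1.0 / p)

# Toy example: three devices, one scalar parameter each.
params = [[1.0], [2.0], [25.0]]
w = [1 / 3, 1 / 3, 1 / 3]
print(weighted_power_mean(params, w, p=1))   # plain average
print(weighted_power_mean(params, w, p=11))  # pulled toward the max of 25
```

With $p=11$ the aggregate lands around 22.6, illustrating the rebuttal's point that a large exponent drives the mean toward the maximum parameter value.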
R1-3: How the hyperparameters for the different methods were chosen to ensure fairness.
Response: We use the same basic hyperparameters, such as model structure, learning rate, batch size, optimizer, etc., for all baselines. For methods with unique hyperparameters, we also made targeted adjustments to ensure a fair comparison between all methods. Details are presented in the Baselines section (lines 266-275).
[R1] Martijn De Vos, Sadegh Farhadkhani, Rachid Guerraoui, Anne-Marie Kermarrec, Rafael Pires, and Rishi Sharma. Epidemic learning: Boosting decentralized learning with randomized communication. Advances in Neural Information Processing Systems, 36, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I acknowledge that I missed some of the additional settings presented in table 1. Therefore I am willing to increase my score to 5. I will not go beyond score of 5 because I still find the diversity of the experiments limited (e.g. datasets, models, lack of recent baselines).
PS: could you also address the minor issues I mentioned in my review?
---
Rebuttal 2:
Comment: Thank you very much for your valuable suggestions and acknowledgment. It is essential to highlight that our method is specifically designed for decentralized machine learning scenarios with strong data heterogeneity and time-varying communication networks. On excessively large and complex datasets, achieving global convergence is extremely challenging, and training may even fail to converge. This is why existing studies in this field predominantly use the MNIST and CIFAR-10 datasets for evaluation, e.g., [R1, R2]. We will also actively explore implementing our method across a broader spectrum of scenarios.
The responses to the two above minor issues are as follows:
1) We confirm no additional index $i$ is needed in Equation (1). The $w$ in Equation (1) denotes the aggregated local model from neighboring devices, represented as $f_i(w)=\mathbb{E}_{\zeta_i\sim D_i} [\mathcal{F}(w_i;\zeta_i, G(t))]$ (please refer to Line 109, Page 3). The notation in Equation (1) thus represents selecting a single model that optimizes $F(w) = \sum_i f_i(w)$.
2) We confirm that our manuscript is generated using the NeurIPS 2024 LaTeX template, and manual reference typing is not applicable in this context. There may be some formatting issues during the PDF conversion process that lead to some references not being linked to the bibliography.
[R1] Vogels T, He L, Koloskova A, et al. Relaysum for decentralized deep learning on heterogeneous data. Advances in Neural Information Processing Systems, 2021, 34: 28004-28015.
[R2] Le Bars B, Bellet A, Tommasi M, et al. Refined convergence and topology learning for decentralized SGD with heterogeneous data. International Conference on Artificial Intelligence and Statistics. PMLR, 2023: 1672-1702.
Title: Thank you very much for your recognition! | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their careful reviews and constructive suggestions. Our responses to the main issues are below: | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Low Degree Hardness for Broadcasting on Trees | Accept (poster) | Summary: The authors study hardness of broadcasting on trees for low degree polynomials. The main result shows that log(N) degree polynomials of the leaf values have vanishing correlation with the root, resolving the main open question of Kohler and Mossel (NeurIPS 2022).
The tree broadcasting problem consists of a $d$-ary depth-$\ell$ tree $T$ (or some relaxation thereof), an ergodic Markov chain $M$ over state space $[q]$, and a starting root distribution $\nu$ over $[q]$. The broadcasting process starts by drawing $j \sim \nu$, then "broadcasts" a value to every child in the tree based on the transition matrix $M$. This is repeated $\ell$ times until the process terminates at the leaves. Roughly speaking, the broadcasting problem asks whether it is possible to infer the starting value at the root from the leaf values produced by the above process.
The most basic algorithm for tree broadcasting looks at linear statistics of the leaves, e.g. count statistics of how many times each state in $[q]$ appears. There is a well-known threshold for linear algorithms called the Kesten-Stigum bound, which states that such inference is possible if and only if $d\lambda^2 > 1$, where $\lambda$ is the second-largest eigenvalue of $M$. It is a natural (and known open) question whether this bound continues to hold against *low-degree polynomials* of the leaves. This work answers the question in the positive for polynomials up to degree $\log(|T|)$: all such functions have vanishing correlation with the root.
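To make the model concrete, here is a small, self-contained simulation (our own illustration, not from the paper) of broadcasting over a binary symmetric channel, together with the linear count-statistic estimator that the Kesten-Stigum bound concerns; the parameter choices are arbitrary:

```python
import random

def broadcast_leaves(root, d, depth, delta, rng):
    """Broadcast a +/-1 root value down a d-ary tree of the given depth:
    each child copies its parent, independently flipped with prob. delta
    (a binary symmetric channel, whose second eigenvalue is lambda = 1 - 2*delta)."""
    level = [root]
    for _ in range(depth):
        level = [x if rng.random() > delta else -x
                 for x in level for _ in range(d)]
    return level

def majority_estimate(leaves):
    """Linear (count-statistic) estimator: the sign of the leaf sum."""
    return 1 if sum(leaves) >= 0 else -1

rng = random.Random(0)
d, depth, delta = 3, 8, 0.1   # lambda = 0.8, so d * lambda^2 = 1.92 > 1: above KS
hits = sum(majority_estimate(broadcast_leaves(1, d, depth, delta, rng)) == 1
           for _ in range(100))
print(hits)  # well above the 50 expected from random guessing
```

Below the Kesten-Stigum threshold (e.g. `delta` close to 0.5), the same estimator's success rate decays toward chance as `depth` grows; the paper's result says this failure extends to all polynomials of the leaves of degree up to $\log |T|$.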
Strengths: Broadcasting is a well studied and useful model. The question of how well low-degree statistics of the leaves correlate with the root is extremely natural mathematically, and is supported by a number of recent works showing the general power of low-degree polynomials in predicting statistical-computational gaps. The only known bounds for this general problem were for the very specialized $\lambda=0$ case.
The techniques introduced by the authors, including the notion of fractal capacity (a more involved proxy for low degree functions that aids their proof), and the resulting inductive method for low-degree (low fractal capacity) functions may be of independent interest. As the authors note, it is one of the first methods for dealing with strongly non-product structures in this context, which is a very frequent barrier throughout related areas (e.g. analysis of Markov chains, boolean function analysis, etc).
The authors give a nice overview of their method in the symmetric linear case that is easy to follow (though a bit dense with the notions of fractal capacity introduced before this overview). Moving to the low-degree case requires highly non-trivial modifications of this method, but the intuition given there is very helpful.
Weaknesses: I do not see any substantial weaknesses in this work.
One could of course ask for lower bounds against super-logarithmic degree polynomials as in known in the $\lambda=0$ case, but the results in this paper still mark a major step forward on this problem.
I would request the authors run a grammar/syntax check. There are a huge number of typographical/syntax errors that slow down the reading of the work to some extent.
Technical Quality: 4
Clarity: 3
Questions for Authors: N/A
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | null | Summary: The paper shows that for a Markov process in a tree, propagating a starting state at the root to the leaves it is "hard" to infer the starting state from the leaf states.
Concretely, they show that for any function $f$ of the leaf states with bounded Efron-Stein degree, the variance of $f$ conditioned on the root state is bounded by a function of the total variance of $f$. That is, the output of such a function varies relatively little with respect to the choice of the root value, which means that little can be inferred about the root value from the leaf values.
In particular, since bounded-degree polynomials have bounded Efron-Stein degree, this implies that any low-degree polynomial of the leaf values cannot be correlated with the root, i.e. as the tree depth goes to infinity the correlation goes to zero. This holds even in cases where it would be possible to recover some information about the root, so the result really does imply a bound on the power of low-degree functions.
Strengths: The paper shows a very nice result, and I feel like the overview in the main body gives a good idea of how the proof of the result proceeds.
Weaknesses: There are no real weaknesses to the paper, however due to space constraints it is not really possible to ascertain the claimed results from the main body of the paper. I could not check the appendix carefully.
Technical Quality: 4
Clarity: 4
Questions for Authors: - p2,l60 there is a broken reference
- On page 6 the application of the CS-inequality has a typo in the final term, it should be $i\in [m]$ below the sum
- p7,l203: of in a Markov Chain -> of a / in a
- p8,l229 there is a missing ")" before the first square
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No Limitations, this is theoretical research first and foremost
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | null | Summary: This paper obtained low degree hardness results for broadcasting on trees (BOT).
The BOT problem is as follows.
Consider a rooted tree and an associated Markov process defined by a transition matrix $M$.
The process is initialized by a distribution on the root.
Then for each vertex with state $s$, its children are i.i.d. according to the $s$-th row of $M$.
The goal is to infer the state of the root given all leaves.
This paper shows that below the Kesten--Stigum threshold, any polynomial of the leaves of degree at most $\log N$ fails to recover the root state.
This is particularly interesting since for some channels the KS threshold is information theoretically sharp and therefore they exhibit a stat-comp gap.
Strengths: NA
Weaknesses: NA
Technical Quality: 3
Clarity: 1
Questions for Authors: I'm not familiar with this literature but I did try to check some of the cited papers to get a sense.
The result of this paper is certainly above the bar.
However, there's (big) room for improvement of the presentation.
In fact, given the relevance of BOT and low-deg, upon proper revision of the exposition, the authors may want to send this paper to a journal instead.
Major comments:
1. A related work section at the level of technique (i.e., low-deg) is needed.
To balance the space, the authors may want to reduce some parts of Sec 1.1 which are quite standard.
1. Do the notions of $\mathcal{B}(\mathcal{A})$ and fractal capacity exist in the literature, or did the authors introduce them?
It's better to make it clear since the mere definitions of these are a nontrivial contribution IMO.
Minor comments:
1. Line 9, what do you mean by "high level of complexity"? It was just mentioned that BP runs in linear time.
1. Line 10, at this point, it's not even clear what "chain" means. Please try to make the abstract self-contained.
1. Line 20, based on.
1. Line 22 doesn't parse grammatically.
1. Line 22-25, please provide references.
1. I don't understand line 37. Why compare low-deg **algorithms** to SoS **lower bounds**?
Also, what does "easy to use" mean?
1. Line 40, [36] doesn't seem to involve AMP. Do you mean https://arxiv.org/abs/2212.06996 ?
1. Line 42, "broadcast on trees" or "broadcasting on trees"? Please unify the terminology.
1. Line 47, What do you mean by "a linear estimator in the number of leaves"?
The estimator is a linear function of the leaves?
Or it runs in linear time as a function of the number of leaves?
1. Line 54, our --> out
1. Line 56, does "sufficiently many states" mean $q\ge C$ for $C>5$? If so, it's better to make this clear.
1. Line 60, broken reference.
1. The discussion in line 54-61 is written in a chaotic way.
Please reorganize the relevant existing results.
1. Line 68, leaves of for
1. I don't understand the point of line 67-70.
Is the message that for **small** $d$, large degree polynomials fail, but for **large** $d$, efficient reconstruction is possible?
In any case, these lines can be made more clear.
1. Line 93, the notation $T$ hasn't even been introduced, so no need to abuse it.
1. Line 96, begin with define
1. Line 126, the notation $X_A = (X_v)_{v\in A}$ hasn't been defined so far (correct me otherwise).
1. Could the authors comment on the equation between line 139-140?
It says that conditioned on the root, the variance of (a low degree function of) the leaves will drop drastically (exponentially in depth).
This seems to imply that $f(X_L)$ is highly correlated with $X_\rho$ and the correlation is **increasing** with the depth.
Apparently my interpretation is very wrong and contradicts the main message of the paper.
Could the authors remark on why this is the correct statement to prove and how this implies Corollary 1.8?
1. Corollary 1.8: Please define the correlation $\mathrm{Cor}$.
1. Line 146, "the main result is optimal in the fractal sense". This is interesting.
Could the authors expand on this (maybe after the fractal capacity and stuff are properly defined later)?
1. Line 154, the font of $b_1, \dots, b_k$ changed.
1. Definition 1.11, to make sure I understand it correctly, $\mathcal{B}(\mathcal{A})$ is **not** the closure (under decomposition) of $\mathcal{A}$, right?
1. Line 171, for $i$ for $i$
1. Line 178, redundant line between end of proof and $\square$.
1. Definition 1.14, an $\mathcal{A}$-polynomial
1. Line 193, $2$-ary --> binary
1. Line 194, eigenvalue --> eigenvalues
1. Line 199, including in cases
1. End of page 6, comparing to --> compared to
1. The last equation of page 6 is unnecessary.
1. Equation in line 202, is last $\lesssim$ simply $=$?
1. I(1), what is $S$?
1. I(2), what is $S'$? In fact $S'$ is not even used in I(2).
1. Line 229, What is $X$? I don't think $X$ is the whole tree?
Also, there's a missing right parenthesis.
1. Line 230, satisfies --> satisfy.
1. Line 234, "builds on this strategy", which strategy? This sentence feels out of place.
1. Equation above line 242, what is $\mathcal{J}$?
1. Line 242, whose variables is --> are
1. Below line 244, $x_{x_{\le w_1}}$.
1. Line 252, will also holds --> hold
1. Equation above line 262, the argument of the function is included on the LHS but not on the RHS of the equation.
1. Somewhere near the end of page 9, the font of $h_{\mathcal{A}_k}$ changed.
1. Not that it matters, but I don't think the authors used the latest version of the NeurIPS template which has a more tedious checklist.
Confidence: 2
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Comment: We sincerely appreciate the reviewer's careful reading of the manuscript. The feedback is valuable and we will address the issues raised by the reviewer in the revised version of the paper. Below we address some of the major issues raised by the reviewer:
1. Low-degree polynomial
Thank you for suggesting that we add references related to low-degree polynomials. We will definitely include some, in particular on low-degree algorithms for stochastic block models. We note that almost all the work in this area uses independence of random variables in a strong way, and our results are novel in that they prove low-degree lower bounds in a setup where there is no obvious way to present the underlying variables as a product measure.
2. Fractal capacity
To the best of our knowledge, the definition of fractal capacity is new. However, there are many notions of capacity for fractals, so ours may be related to some of them. We will add standard references on fractal capacity.
Due to the limited space, below we address the mathematical questions raised by the reviewer:
- Q1:
The statement about BP being linear pertains to the number of leaves $|L|$. Thus, while it is "linear", it is actually a polynomial of Efron-Stein degree exponential in the depth $\ell$.
- Q6-7:
We wanted to briefly discuss the classes of algorithms captured by low-degree polynomials. This is why we mention that they are considered more efficient than SoS algorithms of similar degrees and also that they capture many local, spectral, and AMP algorithms. The statement has caveats, so perhaps we can begin the paragraph in the paper by saying ``it is often argued that''.
- Q19:
Consider the random variable $$Y = \mathbb{E}[f(X_L)\,|\, X_\rho]\,.$$
Then, $Y$ can be interpreted as a function of $X_\rho$:
when $X_\rho = \theta$,
$Y$ is the expected value of $f(X_L)$ conditioned on $X_\rho = \theta$. The theorem states that ${\rm Var}(Y)$ is exponentially small in $\ell$ compared to ${\rm Var}(f(X_L))$.
Suppose $\text{Var}(Y) = 0$. This means the conditional expectation is the same regardless of the value of $X_\rho$, which implies that $f(X_L)$ and $X_\rho$ have zero correlation. This can be verified using the definition of correlation:
\begin{align*}
{\rm Corr}(f(X_L), g(X_\rho)) &= \frac{{\rm Cov}(f(X_L), g(X_\rho))}{{\rm Var}(f(X_L))^{1/2} {\rm Var}(g(X_\rho))^{1/2}} .
\end{align*}
Now, for the covariance, we can estimate it via conditioning on $X_\rho$ and then applying the Cauchy–Schwarz inequality to get
$$
{\rm Cov}(f(X_L), g(X_\rho))
= \mathbb{E}\Big[(f(X_L) - \mathbb{E}f) (g(X_\rho) - \mathbb{E}g)\Big] = \mathbb{E} \Big[\mathbb{E}\big[(f(X_L) - \mathbb{E}f) (g(X_\rho) - \mathbb{E}g) \,\big|\, X_\rho\big] \Big]
\le \mathbb{E}\Big[\mathbb{E}\big[(f(X_L) - \mathbb{E}f) \,\big|\, X_\rho\big] (g(X_\rho) - \mathbb{E}g) \Big] $$
$$\le \sqrt{\mathbb{E}\Big(\mathbb{E}\big[(f(X_L) - \mathbb{E}f)\Big)^2} \sqrt{ \mathbb{E}(g - \mathbb{E}g)^2}
= \sqrt{{\rm Var}[ \mathbb{E}[f(X_L)\,|\, X_\rho]]} \sqrt{ {\rm Var}g}. $$
Substituting this back and applying the inequality stated in the theorem, we get
$$
{\rm Corr}(f(X_L), g(X_\rho))
\le
(\max\{d\lambda^2, d\lambda\})^{\ell/8},
$$
or, applying the assumption ${\rm Var}(Y)=0$, we get that the correlation is also $0$.
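As an illustrative side note from the editor (not part of the paper's low-degree argument), the role of the Kesten–Stigum quantity $d\lambda^2$ in root–leaf correlation decay can be checked exactly for the simplest statistic, the sum of the leaves, on a binary tree with $\pm 1$ states and a channel whose second eigenvalue is $\lambda$, so that $\mathbb{E}[X_u X_v] = \lambda^{\mathrm{dist}(u,v)}$. The recursion below is a standard computation and the function name is ours.

```python
def leaf_sum_corr(lam, depth):
    """Corr(S, X_rho) for S = sum of leaf spins on a complete binary tree.

    Assumes +/-1 states and a symmetric channel with second eigenvalue
    lam, so that E[X_u X_v] = lam ** dist(u, v) and E[X_v] = 0.
    """
    var = 1.0  # depth 0: the "leaf sum" is the root itself, Var = 1
    for l in range(1, depth + 1):
        # A depth-l tree splits into two depth-(l-1) subtrees. Cross-subtree
        # leaf pairs are at distance 2l, and each subtree has 2**(l-1) leaves,
        # so Cov(S_a, S_b) = (2**(l-1) * lam**l) ** 2.
        var = 2.0 * var + 2.0 * (2 ** (l - 1) * lam ** l) ** 2
    cov_root = 2 ** depth * lam ** depth  # each leaf: E[X_leaf * X_rho] = lam**depth
    return cov_root / var ** 0.5          # Var(X_rho) = 1
```

For $d=2$ and $d\lambda^2 = 2\lambda^2 < 1$ this correlation decays exponentially in the depth, while for $2\lambda^2 > 1$ it stays bounded away from zero, matching the heuristic picture of the KS threshold for this simple linear statistic.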
- Q21: By "optimal' here it means that, the fractal capacity of a set $S$ can be at most $\ell+1$. Yet, every polynomial of the leaves with fractal-capacity $\le c\ell$ has exponential small correlation with the root $X_\rho$.
In other words, using the trivial upper bound, we captured the correct order of the fractal capacity when reconstruction is not possible. We will added this to the manuscript.
- Q23:
Yes, $\mathcal{B}(\mathcal{A})$ in general is not the closure of $\mathcal{A}$.
Notice that while we define $\mathcal{B}(\mathcal{A})$ for any subcollection of leaves, we only analyze $\mathcal{B}(\mathcal{A})$ in the case when $\mathcal{A}$ is itself closed under decomposition. For such $\mathcal{A}$, $\mathcal{B}(\mathcal{A})$ will get larger as long as $\mathcal{A}$ does not contain every non-empty subset of $L$.
Consider the simplest case when $\mathcal{A} = \mathcal{A}_1$, the set of singletons. Then it is not hard to check that every two-element set is contained in ${\mathcal B}({\mathcal A})$.
- Q33 and 35: Here we use $f_\alpha(x_S)$ to indicate that $f_\alpha$ is a function of $x_S$ for some $S \subseteq L$, and likewise for $f_\beta(x_{S'})$. Sometimes we simply write $f_\alpha(x)$ without specifying the set $x_S$ (for Q35). For I(2), there is a typo to be corrected: it should be "$S' \subseteq L$ satisfying $S' \cap \{v' \in L : v' \leq w'\} = \emptyset$." For the equation between lines 223 and 224 to hold, one simply needs $S' \cap \{v' \in L : v' \leq w'\} = \emptyset$, so that one can apply conditional independence.
---
Rebuttal 2:
Comment: I thank the authors for the detailed response.
In the response to Q19 where $\mathrm{Cov}(f(X_L), g(X_\rho))$ is upper bounded, there seem to be some inaccuracies, if I'm not mistaken.
- The first $\le$ is an equality.
- In the second $\le$, there is a missing conditioning on $X_\rho$ in the first term.
Others look good to me.
I have raised my score to 6.
---
Rebuttal Comment 2.1:
Comment: We would like to thank the reviewer for the consideration!
For Q19, the reviewer's comment is correct. The first inequality is indeed an equality, and the second inequality is missing the conditioning. | Summary: This submission considers the broadcasting on trees problem, where given a rooted tree, information is propagated from the root to the leaves using a Markov process. The algorithmic task is to infer the information at the root given only the information at the leaves. Previous works had identified that the KS-threshold (a threshold based on the spectral gap of the Markov process and the degree-structure of the tree) is the critical threshold when this is possible information-theoretically in some specific cases. However, in general it is not the right information-theoretic threshold, i.e., in some cases inference is possible even below it. The submission gives evidence that it is indeed the right threshold, however, when considering computationally bounded algorithms. In particular, it establishes hardness in the low-degree framework below the KS-threshold.
Strengths: Previous work had asked the question whether the KS-threshold is the correct threshold for efficient algorithms. This work answers this question in the affirmative. The ideas used in the proof are novel and clever.
Weaknesses: The first part of the introduction is a bit hard to follow for non-experts, maybe some additional context would help.
Technical Quality: 3
Clarity: 3
Questions for Authors: No questions.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Topological obstruction to the training of shallow ReLU neural networks | Accept (poster) | Summary: This work studies how topological properties affect the loss landscape of neural networks, showing that the landscape can be divided into disconnected regions such that reaching one region from another using GF is impossible. The theory is shown to have consequences for understanding the training of simple networks.
Strengths: Studying topological properties of the loss landscape is interesting and should be encouraged, the theory is solid and convincing
Weaknesses: This paper has quite a few weaknesses in its current form, and these prevent me from recommending acceptance at this stage
1. Prior works. (a) it is unclear how different this work is from Safran et al. [35]. This point is never discussed, and this makes it difficult for me to judge the novelty of this work. (b) Incorrect reference. Eq. (11) appears the earliest in https://arxiv.org/abs/1312.6120, not in Du et al. [11]. (c) emergence of topological obstruction due to permutation/rescaling symmetries have been studied in https://arxiv.org/abs/2309.16932, and the authors need to compare the results to clarify what is novel in the present manuscript
2. The main result applies only to two layer bias-less ReLU nets, which I feel to be too weak
3. The meaningful implication (Corollary 1) is only relevant when either the input or the output dimension is 1, which makes the theory very unlikely to be relevant in practice
4. The experiment also has very limited scope. If the authors can show that the effect they studied is relevant for training a much larger model on a more realistic dataset, I would be more convinced of the relevance and contribution of the theory
Technical Quality: 3
Clarity: 3
Questions for Authors: Does the results still hold when there is a bias term?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper could also benefit by discussing more of its limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Colleague, thank you very much for your useful and insightful review.
___
**Weaknesses.**
**W1.**
- **(a)** Thanks for this question, an answer to which we believe can improve the quality of our paper. There are several differences, namely: first, in [35] the input-output layers are restricted to be 1-dimensional; second, they are interested in analyzing the number of PL components of the trained networks, i.e., the decomposition of the input space into convex polytopes over which the function associated with the NN is linear. We are instead trying to understand the geometry and the topology of the weight space under GF training. We will insert this clarification in the main text.
- **(b)** A result close to Eq. (11) is indeed found there, although applied to *linear* neural networks without ReLU activations. We nonetheless agree that it should be mentioned in the main text as preceding Du et al. [11].
- **(c)** We thank you for highlighting this work, which we were not aware of at the time we submitted ours. Nevertheless, following your pointer, we noticed that it has now been published in ICML. In that paper, the authors study general mirror-reflect symmetries (which include rescaling) of the loss function and derive the constraints that they impose on the gradient and, as a consequence, on gradient-based optimization. While they derive interesting results from their framework, like the emergence of sparsity, their work doesn't deal with the topology of such constraints, which is the main goal of our paper. We will be happy to include it in the references and point out what we have just observed.
**W2.**
As we mention in line 91 of the paper, we discuss the inclusion of biases in Appendix E. The results we find are mostly left unchanged as biases can be treated in the same way as the parameters of the input weights, effectively resulting in a network with $d+1$ inputs.
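A minimal numerical sketch of this reduction (an editor's illustration; shapes and names are arbitrary): folding each bias into an extra input weight acting on a constant input coordinate leaves the network function unchanged.

```python
import numpy as np

def forward_bias(W1, b, w2, x):
    # shallow ReLU network with biases and scalar output
    return w2 @ np.maximum(W1 @ x + b, 0.0)

def forward_nobias(W1_aug, w2, x_aug):
    # the same network viewed as bias-free with d+1 inputs
    return w2 @ np.maximum(W1_aug @ x_aug, 0.0)

rng = np.random.default_rng(2)
W1 = rng.normal(size=(4, 3))
b = rng.normal(size=4)
w2 = rng.normal(size=4)
x = rng.normal(size=3)

W1_aug = np.hstack([W1, b[:, None]])  # bias becomes a (d+1)-th input weight
x_aug = np.append(x, 1.0)             # constant feature 1 appended to the input
```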
This shallow network as a model has limitations, but our work could become a theoretical stepping stone for studying other architectures. Indeed, this framework may be found inside other models (e.g., CNNs) or mechanisms (e.g., Graph Attention Networks). Besides, there is an extensive scientific literature on shallow ReLU networks, as can be appreciated in refs. A, B, C and, in particular, in a widely cited survey on shallow networks, which will be added to our references in answering your important question.
**W3.**
Corollary 1 is a mathematical fact independent of the nature of the data, and its meaning lies in being a structural characteristic of shallow ReLU networks tout court. Concerning the likelihood of facing these situations, there are two distinct considerations to be made: first, there are multiple widely used tasks where the output is a single scalar, for example binary classification and scalar regression, and there is boundless literature on this architecture; on the other hand, we agree that the case of input dimension equal to 1 may be of theoretical interest only.
**W4.**
Please find in part 2 of the general rebuttal the results of a further experiment that displays the effect of obstruction in a more realistic setting.
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed reply
Comment: The reply addresses my main concerns. Although I still think the main problem is with its unclear relevance. I will raise the contribution rating to fair and the score to 6 | Summary: This paper considers the landscape topology of one hidden layer networks with non-negative homogeneous activations. The main result of this work is that in some cases, the loss landscape may consist of disconnected components that cannot be traversed by gradient flow dynamics. This leads to an obstruction, in the sense that certain initial conditions will result in closed paths away from the global minimum. They further count the number of effective components (in the sense of being equivalent under the symmetries of the problem) and show that this number grows linearly with the number of hidden neurons that cannot change their sign during the dynamics.
The authors provide a full analytical treatment of the problem, supplemented by a simple toy example.
Strengths: I believe the paper has several strengths, in particular:
- **Soundness and clarity** - The authors review some known results, and build upon them in an instructive way, that allows the reader to understand precisely what has been done. Additionally, they provide proofs, as well as intuition for all of their results.
- **Novelty** - As far as I know, this topological analysis is new, with the closest work being Ref. [1], which is still very different. I believe this work can shed light on the training dynamics of GD in this setting.
References:
[1] - https://arxiv.org/pdf/2401.10791
Weaknesses: The main weakness of this work is its **scope**. The work studies in great detail a very specific architecture, but it is unclear how interesting their conclusions are for either theorists or practitioners. Additionally, their numerical/toy example is extremely simple, and it is a bit hard to see how their results extend to more complicated settings, even though they discuss some of these aspects in their limitations section. It would be useful to include more complicated examples, even in the single hidden layer case, as well as contrast these results with cases in which the activations are not ReLU-like, i.e., commenting on how these results break down.
Technical Quality: 3
Clarity: 3
Questions for Authors: Comments and questions below:
1) L52: “studied by under various…” incoherent sentence.
2) L51: “activation” should be “activations” I assume.
3) In Sec. 3.3, the authors consider an arbitrary dataset and a general empirical loss function. The analysis does not depend on these details and is therefore truly a property of the network and the initialization, and so these results should hold for any task. Is this correct?
4) The experimental setup in Sec. 6 is incomplete - the loss is never specified (MSE, CE?), it would be nice to see (empirically) that the results do not depend on it.
5) Can the authors comment on the relation of their work and Refs. [1,2]?
6) What happens to the topological structure if we consider the discrete GD instead of GF? the results should hold for small learning rate (as shown numerically), but there is a possible catapult regime (Ref. [3]) at large learning rate, will the paths not be restricted to connected parts in this case? could the catapult effect cause a transition between disconnected effective manifolds?
[1] https://arxiv.org/pdf/2401.10791
[2] https://arxiv.org/pdf/2402.05626
[3] https://arxiv.org/abs/2003.02218
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss their limitations in a dedicated section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Colleague, thank you very much for your useful and insightful review.
___
**Questions.**
**Q1,Q2.**
Thank you for the corrections.
**Q3.**
That is correct. Particular activations or datasets may induce other symmetries but our results are independent of their choice.
**Q4.**
Thank you for pointing out the lack of specificity about the loss function, which we will fix in the final version of the paper. We indeed used the MSE loss. You can appreciate the generality w.r.t. the loss function in the experiment described in part 2 of the general rebuttal where the BCE loss is employed.
**Q5.**
The main relation of [1] with our work is in their Lemma 1 (balancing in GF) and the subsequent observation on the constant sign of the weight on the outgoing edge when $e=1$, which is the same observation proved in our Corollary 3. Nevertheless, while it is not difficult to prove Corollary 3 as a consequence of the balance equations in the case $e=1$, our topological approach not only shows that the obstruction happens if and only if $e=1$, but also explains in precise mathematical terms why the obstruction appears or not.
In [2], the authors study one-layer networks with homogeneous activation functions and, in the case of MSE loss, they derive a condition for a stationary point to be a local minimum.
This condition prescribes that there must be no *escape neurons* which, intuitively, provide escape directions from a saddle point.
In the case of a scalar output ($e=1$), they prove that this condition is necessary and sufficient and that escape neurons are characterized by having $W^{(2)}_k = 0$.
The general goals of the paper are different from ours, as we don't make any assumption on the loss function and, therefore, we don't aim at characterizing its local or global minima. However, it would be interesting to study the relations between the two works, in particular how our *pathological neurons* relate to the escape neurons.
**Q6.**
Please refer to part 3 of the general rebuttal, where we show one example of gradient descent circumventing the obstruction.
At the moment, it is difficult to know if this is due to the catapult mechanism, and we think that understanding it is an interesting future research direction.
___
**Weaknesses**
**W1.** *The main weakness of this work is its scope. The work studies in great detail a very specific architecture, but it is unclear how interesting their conclusions are for either theorists or practitioners.*
This shallow network as a model has limitations, but our work could become a theoretical starting point for studying other architectures in a similar way. Indeed, single-layer ReLU neural networks are commonly embedded as sub-networks of architectures like CNNs, which could show obstructions of the same kind studied here.
The implications are not direct on the practical side, but initialization schemes could benefit from circumventing the possibility of obstruction by having $c_k>0$ for every hidden neuron.
**W2.** *Additionally, their numerical/toy example is extremely simple, and it is a bit hard to see how their results extend to more complicated settings, even though they discuss some of these aspects in their limitations section. It would be useful to include more complicated examples, even in the single hidden layer case, as well as contrast these results with cases in which the activations are not ReLU-like, i.e., commenting on how these results break down.*
Please refer to part 2 of the general rebuttal, where we describe a further, more realistic experiment.
---
Rebuttal Comment 1.1:
Title: Reply to the Authors
Comment: I thank the authors for their detailed reply, as well as their global reply and further experiments.
I believe the paper should be accepted, but due to its limited scope, and after reading other reviews, including that of Reviewer JxYP, which addressed regularization and the "building block" argument etc., I believe that a higher score is not warranted.
I do recommend that the authors delve deeper into the finite learning rate in future works, and especially the lr=0.533 example that was found, to understand how the obstruction is circumvented as a theoretically motivated reason for using large learning rates. | Summary: In this paper, authors analyze the performance of gradient-descent optimization over a two-layer neural network. Authors reveal the presence of obstructions in the loss landscape and explore their topology. Finally, they identify the cause of those optimization obstructions in the so-called `pathological’ neurons—neurons that can’t change the sign of their output weight.
Strengths: 1) An interesting and novel approach.
2) A step up from well studied gradient descent optimization of one hidden-layer networks.
3) Solid theoretical background and strict mathematical proofs of main results.
Weaknesses: 1) This paper identifies pathological neurons that disrupt the performance of gradient descent over ReLU activations; however, nothing is said about how such obstructions can be avoided or how their effect can be mitigated.
2) The scope of the research is limited to neural networks with two hidden layers.
3) The possible benefits for further research and practical applications of the achieved results are somewhat vague.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses section.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the limitations are addressed. This is a purely theoretic work, dealing with a somewhat simplified example of a two layer fully-connected neural network.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Colleague, thank you very much for your useful and insightful review.
___
**Weaknesses.**
**W1.**
As we mention in lines 245 and 246, one can easily control the initialization to have $c_k\geq 0$ for every hidden neuron by employing, for example, Proposition 4. Once that holds we have that the invariant set is connected and the topological obstruction is avoided. Besides, please take a look at part 1 of the general rebuttal, where we address the probability of having obstructions under some of the more common initialization schemes.
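A sketch of what such an initialization check could look like, added by the editor for illustration. It assumes the sign convention $c_k = \|W^{(1)}_k\|^2 - (W^{(2)}_k)^2$ (so that $c_k>0$ rules out the obstruction for neuron $k$); the convention, the initialization scales, and all names here are the editor's assumptions, not taken from the paper.

```python
import numpy as np

def conserved_quantities(W1, w2):
    # Balancedness invariants of gradient flow for a shallow ReLU net,
    # under the assumed convention c_k = ||W1[k]||^2 - w2[k]^2.
    return (W1 ** 2).sum(axis=1) - w2 ** 2

rng = np.random.default_rng(0)
d, h = 20, 100
W1 = rng.normal(0.0, np.sqrt(2.0 / d), size=(h, d))  # He-style fan-in scaling
w2 = rng.normal(0.0, np.sqrt(1.0 / h), size=h)       # small output weights
c = conserved_quantities(W1, w2)
n_bad = int((c <= 0).sum())  # neurons at risk of a topological obstruction
```

With fan-in-scaled hidden weights and small output weights, $c_k>0$ holds for essentially every neuron; scaling the output weights up instead flips many $c_k$ negative, which is the regime where the obstruction can occur.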
**W2.**
We agree that being restricted to two-layer neural networks is the main limitation of our work. We are currently working on extending the results to multiple layers, and we are obtaining encouraging results. However, we believe that the two-layer case provides an elegant and important proof of concept on the possibility of studying topological obstructions to learning. Moreover, while most architectures are not two-layer neural networks, most of them contain two-layer networks as building blocks, and therefore, we can conjecture that obstructions like the ones studied in our work can occur even in that case.
**W3.**
While ours is a theoretical work, we believe that the main practical implication is that, when training neural networks with scalar output, one should check the values of $c_k$ for each hidden neuron and, if possible, choose an initialization that ensures $c_k>0$. In that way, in fact, we can be sure that no topological obstruction to learning of this kind can occur.
The future research directions, in our opinion, are many and interesting. First, the extension to the multilayer case is the most natural one and can also provide insights into other architectures, like CNNs, which can be seen as subnetworks of MLPs. Second, the literature provides us with other symmetries, like the scaling symmetry given by batch-norm and the translation symmetry given by adding a softmax before the loss. These symmetries induce constraints which can be studied with our framework.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: Thank you for answering my questions and providing clarifications; I have also carefully read the reviews from other reviewers (and rebuttals to them). I found this paper quite interesting and I lean towards thinking it worthy of acceptance. Unfortunately, my prior experience in the optimization theory is rather limited, and I will hold on my current score for this paper (6). | Summary: This paper studies the topological obstruction in the loss landscape of two-layer ReLU neural networks under gradient flow. Using conserved quantities in gradient flow, the authors define invariant sets, to which gradient flow trajectories are constrained. They then show that when the input or output of the network is a scalar, the invariant set may have more than one connected components depending on initialization. This leads to a topological obstruction, as gradient flow cannot reach a global minimum if parameters are initialized in the component that does not contain one. After taking scaling and permutation symmetry into account, the number of effective components scales linearly with the number of pathological neurons, which are neurons that cannot change the sign of their output weight. On a toy example with two hidden neurons, the authors fully characterize the invariant sets and show theoretically and empirically that gradient flows that do not start in the same effective component as the global minimum cannot reach the global minimum.
Strengths: - This paper takes a novel approach of studying the topology of loss landscapes by examining invariant sets. While many work studies the loss landscape by proving topological properties of sub-level sets, this paper pursues an orthogonal direction by analyzing the topological properties of the set of reachable parameters by training trajectories. This approach combines the study of loss landscape and optimization dynamics, which leads to the original discovery of topological obstructions to the training of ReLU networks.
- The paper provides concrete examples where gradient flows from certain initializations cannot converge to a global minimum and gives practical suggestions on how to choose initializations to avoid this issue, at least for two-layer ReLU networks.
- The results on when invariant sets can be disconnected and the number of effective components provide valuable information on the loss landscape. These results could potentially lead to a new line of work on the trainability of various architectures under common optimization algorithms.
- The paper is very well written. Setups and mathematical concepts are explained in an accessible way without losing rigor. The theorems and their implications are explained clearly. The visualizations (Figure 1-3) are informative and provide clear intuitions. The toy example in Section 6 is simple and effectively demonstrates a case where topological obstruction prevents gradient flow from reaching the global minimum.
Weaknesses: The scope of the analysis, which is restricted to two layer ReLU neural network and gradient flow, is rather limited.
- While ReLU is a popular activation function, two-layer networks are rare in today’s machine learning models. The results in the paper do not readily extend to deeper networks, where invariant sets no longer factorize easily into product spaces.
- Since SGD is known to have an implicit bias characterized by a drift of $c$, training trajectories in practical settings (with non-infinitesimal learning rate) will not stay in the same invariant set. Even if they stay close to an invariant set throughout training, the noise in each step may take the parameters to a different component.
If extending the analysis beyond this setup is challenging, additional experiments on different settings could also help illustrate that the topological obstruction is a realistic hindrance in practice. For example, for deeper ReLU networks, it might be possible to investigate empirically whether there exist initializations that lead to similar obstructions as observed in Figure 3c.
Technical Quality: 3
Clarity: 3
Questions for Authors: - For ReLU networks with one hidden layer, does every invariant set contain a global minimum? If not, then even if an invariant set is connected, it is not guaranteed that all gradient flows starting from points on this set can reach a global minimum. This is not the type of obstruction considered in the paper, but I am curious about whether there are other considerations when choosing the initialization, besides ensuring the connectedness of the invariant set.
- Assume that an invariant set contains a global minimum. Do all gradient flows in this invariant set converge to a global minimum?
- For a wider two-layer ReLU network, using common initialization methods (such as Xavier initialization), what is the probability that the invariant set has more than one effective components? What is the probability that the initialization falls in the component that does not contain a global minimum? Is it nonzero?
- Do the rest of Betti numbers, other than $\beta_0$, provide useful information about the loss landscape and the training process?
**Minor issues / suggestions**
- Line 135-136: While allowing a finite sequence of actions is arguably more intuitive, it might suffice to just define observationally equivalence as being related by a composition of one $T$ and one $P$. As a consequence of Lemma 3 in the appendix, for any finite sequence of T and P, there exists a sequence of T and P with length at most 2 that produces the equivalent transformation.
- In Equation 11, is the second expression missing a factor of 2?
- The sentence in line 483 is not grammatically correct.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors are upfront about the limitations and include a limitation section in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Colleague, thank you very much for your useful and insightful review.
___
**Weaknesses.**
**W1.**
It is true that our results are limited to one-layer neural networks but we believe that, even if they cannot be generalized to the multi-layer case, they could be interesting when studying architectures which employ one-layer NNs as building blocks (like CNNs).
**W2.**
Indeed, both SGD and GD with finite learning rate will drift away from the invariant set and, if $c_k$ is small enough, we find that this drift can push the trajectory to circumvent the obstruction (see part 3 of the general rebuttal). We stress, however, that from the point of view of our analysis, there are no differences between continuous time GD and continuous time SGD. This comes from the fact that the latter can be seen as a GF with respect to a different loss which depends only on a subset of the dataset (a mini batch) and thus is also invariant to the same rescaling action.
___
**Questions.**
**Q1.**
Thank you for the interesting question. The answer is to be found in Proposition 4. What it tells us is that, for every non-degenerate parameter $\theta$, there is a rescaling that takes $\theta$ to an observationally equivalent one. This means that every invariant set contains copies of all (non-degenerate) parameters and, therefore, the global minimum too.
**Q2.**
Unfortunately, we cannot answer this question a priori as the actual GF trajectory will depend on the specifics of the loss function, i.e., its functional form and the dataset used. By staying completely general, we cannot exclude that the training will get stuck in a local minimum, even if the initialization is in the right component. What we can say is that if $\mathcal{H}(c)$ is not connected, then, surely, a global optimum in another component cannot be reached.
**Q3.**
For the same reasons described in the reply above, we cannot, staying completely general, quantify whether a global minimum will be reached. We can, however, compute some statistics about the topological properties of the invariant set under common initialization schemes. Please see part 1 of the general rebuttal for a thorough answer about the probability of connectedness.
**Q4.**
This is a very interesting question for which we do not have a clear answer yet. While it is clear how connectedness can affect the learning trajectory, it is not clear if and how higher-order holes can affect it.
**Minor issues/suggestions.**
Thank you for pointing out these mistakes, which we will amend in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Thanks for response
Comment: Thank you for addressing my comments and questions. I do not have further questions and will maintain my positive rating. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful and thorough assessments of our work.
Prompted by some of the reviewers' questions, we performed three additional experiments to clarify some of the points raised.
### **1. Probability of obstruction**
Let us consider the following question: *what is the probability of having a disconnected invariant set given a realistic initialization?*
Consider a one-layer neural network with $e=1$ and assume that the weights are sampled independently of one another from a normal distribution
$W^{(1)}\_{ki}\sim\mathcal{N}(0,\sigma_1^2)\ \forall k,i$ and $W^{(2)}\_{k}\sim\mathcal{N}(0,\sigma_2^2)\ \forall k$.
From Corollary 2, we know that the invariant set $\mathcal{H}(c)$ will be disconnected if and only if there exists a hidden neuron satisfying $\sum_{i=1}^d (W^{(1)}_{ki})^2 < (W^{(2)}_k)^2$.
Given the independence of weight sampling, this probability can be computed as
$\mathbb{P}[\text{obstruction}] = 1-\mathbb{P}[\sum_{i=1}^d (W^{(1)}_{ki})^2 > (W^{(2)}_k)^2]^l$,
which can be shown to equal $1-(F_{1,d}(d\sigma_1^2/\sigma_2^2))^l$, where $F$ is the CDF of the Fisher-Snedecor distribution.
Having obtained this general expression, we can specify it to two common initialization schemes.
- We obtain *Kaiming initialization* [1] with $\sigma_1^2 = 2/d, \sigma_2^2 = 2/l$ resulting in $\mathbb{P}[\text{obstruction}] = 1 - F_{1,d}(l)^l$;
- We obtain *Xavier normal initialization* [2] with $\sigma_1^2 = 2/(d+l), \sigma_2^2 = 2/(1+l)$ resulting in $\mathbb{P}[\text{obstruction}] = 1 - F_{1,d}(\frac{d+ld}{d+l})^l$.
We plot these two expressions in Figure 1 of the extra file we uploaded.
We can clearly see how, for large values of $d$, the probability of obstruction quickly falls to 0 for any number of hidden neurons.
Instead, for small values of $d$, we see an opposite trend: the probability of disconnectedness grows with $l$.
Moreover, it is interesting to notice that the region of high obstruction probability is much larger for Xavier initialization than for Kaiming initialization, further showing that the latter is preferred when working with ReLU networks.
As a complement, we also observe that, using the binomial distribution, it is possible to show that the probability of having $2^B$ disconnected regions, under the same sampling scheme described above, is ${ l \choose B}p^B (1-p)^{l-B}$, where $p=1-F_{1,d}(z)$, with $z$ the appropriate argument depending on the variances or the layers' sizes.
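As an aside (not part of the submission), the closed-form probability above can be sanity-checked by direct Monte-Carlo sampling of the initialization. The sketch below, with illustrative parameter choices, draws Gaussian weights and applies the disconnectedness condition of Corollary 2:

```python
import numpy as np

def p_obstruction_mc(d, l, sigma1_sq, sigma2_sq, trials=20000, seed=0):
    """Monte-Carlo estimate of P[obstruction] for a one-layer net.

    A hidden neuron k is pathological when sum_i (W1_ki)^2 < (W2_k)^2
    (Corollary 2); the invariant set is disconnected iff at least one
    such neuron exists.  This estimates the closed-form expression
    1 - F_{1,d}(d*sigma1^2/sigma2^2)^l from the rebuttal.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, np.sqrt(sigma1_sq), size=(trials, l, d))
    W2 = rng.normal(0.0, np.sqrt(sigma2_sq), size=(trials, l))
    pathological = (W1 ** 2).sum(axis=2) < W2 ** 2   # (trials, l)
    return pathological.any(axis=1).mean()

# Kaiming initialization (sigma1^2 = 2/d, sigma2^2 = 2/l) with l = 10:
p_narrow = p_obstruction_mc(d=1, l=10, sigma1_sq=2.0, sigma2_sq=0.2)
p_wide = p_obstruction_mc(d=50, l=10, sigma1_sq=2.0 / 50, sigma2_sq=0.2)
```

As the analysis predicts, the estimated obstruction probability is large for a small input dimension ($d=1$) and close to 0 for a wide input ($d=50$).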
### **2. A more realistic experiment**
We present here a further experiment to show how the topological obstruction presented in the paper can be a hindrance in a more realistic setting.
We consider a simple binary classification task on the well-known *breast cancer* dataset [3] which we try to solve by fitting one-layer ReLU neural networks trained to minimize BCE loss.
We vary the number of hidden neurons $l$ and, for each $l$, we change the number of neurons $k$ such that $c_k>0$ from 1 to $l$.
By repeating the experiment 100 times, we can show how the model's average performance changes when the degree of disconnectedness of its invariant set is varied.
The result, shown in the left panel of Fig. 2, clearly reveals the presence of a "gradient" in performance: increasing the number of positive $c_k$ (non-pathological neurons) tends to decrease the average test loss after training.
### **3. Finite step size**
The results presented in our work hold when the network is trained with continuous-time GF, so it is natural to wonder what changes when we consider finite-step-size gradient descent.
While a formal or thorough numerical analysis is outside the scope of this work, we show in a simple setting how *it is possible* for a large enough step size to make a trajectory abandon the initial $\mathcal{H}(c)$ and circumvent the obstruction.
We consider the same numerical setup as in Section 6 and initialize the parameters in the wrong component of an $\mathcal{H}(c)$ with $c_1=c_2=-0.03$, chosen so that the set is disconnected while the components remain close enough to facilitate the trajectory's escape.
In Fig. 3 of the supplementary pdf, we see how raising the learning rate high enough can change the $c_k$ of the first neuron, pushing it over the obstruction and allowing it to reach the global minimum close to the other component.
- [1] *Delving deep into rectifiers: Surpassing human-level performance on imagenet classification - He, K., Zhang, X., Ren, S. \& Sun, J. (2015).*
- [2] *Understanding the difficulty of training deep feedforward neural networks - Glorot, X. \& Bengio, Y. (2010).*
- [3] *Breast Cancer Wisconsin (Diagnostic). UCI Machine Learning Repository - Wolberg, W., Mangasarian, O., Street, N. \& Street, W. (1995).*
Pdf: /pdf/cbd6a0f440c420b686c3c522852e2b2084edcc14.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: Summary of the paper:
* The paper undertakes a theoretical 'reachability' analysis for neural
networks trained with gradient flow, identifying 'obstructions' separating
certain regions of parameter space from each other.
* The setting is one-hidden-layer networks with homogeneous activation
functions, including ReLUs. The main paper considers the biasless case but
the authors extend their results to the case with biases in the appendices,
with only minor modifications to the results.
* The authors consider training such networks with gradient flow and identify
that regardless of the (output-dependent) loss function that gradient flow
conserves a certain norm of the weights confining the training trajectory
to a subset of the parameter space they call the 'invariant set' which is a
product of hyperquadrics associated with each hidden unit in the network.
* Through topological analysis the authors derive the Poincaré polynomial of
such invariant sets and in turn their zeroth Betti number, in effect
counting the number of connected components in the invariant set.
* These components are of interest because if gradient flow is initialised
in one component it is impossible for the training trajectory to find its
way to another, and training is limited to the best solution available in
the initial component.
* In this setting the zeroth Betti number (the number of connected
components) is only one for networks with multidimensional inputs and
outputs, indicating no topological obstructions.
* In contrast, for networks with scalar inputs or outputs, it is
exponential in the number of hidden units (with a base depending on the
network initialisation), indicating many topological obstructions.
* (For networks with biases, the space is connected if and only if the
network has a multidimensional output).
* The authors then revise their count to take rescaling and permutation
symmetries into consideration.
* The reasoning is that if gradient flow can only explore its initial
connected component, this is only a problem when the component is
actually missing network functions rather than when it is merely missing
some (but not all) implementations of those functions. Perhaps only one
reachable implementation of the target function is needed.
* The result is that there are fewer 'effectively' connected components,
linear rather than exponential in the number of hidden units (with
constants again depending on the initialisation).
* However, the point is that there are still some topological obstructions
to worry about.
* The authors provide a clear 'proof of concept' of their work in the form of
a numerical experiment simulating gradient flow by learning a small neural
network with gradient descent using a small learning rate. The results
clearly show that the learning trajectory is confined to the invariant set
and obstructed from reaching the global minimum as predicted.
I wrote a bit of a long review, so here is also a summary of my review:
* The main contribution, demonstrating the existence of topological
obstructions to gradient flow, appears novel, interesting, and an important
contribution to our understanding of the structure of parameter space.
* I also think the paper is exceptionally well written and clear and the
analysis is elegant. I think the authors have discussed the limitations of
the work well. I am left with only a small number of questions.
* My main concern is that the results show topological obstructions only
occur under some strong assumptions, and in fact they do not arise in
more situations. I find this somewhat at odds with the framing of the
contribution.
Overall, I think the work is worthy of acceptance because it is elegant,
interesting, informative, and thought provoking, but I maintain concerns
about whether the framing is accurate.
Strengths: I thank the authors for submitting their elegant and well-presented theoretical
analysis exploring an important topic.
* Topological analysis of gradient flow is an interesting and apparently
novel approach that stands to make a meaningful contribution to the field's
theoretical understanding of the 'global' structure of parameter space.
* The work clearly exhibits 'topological obstructions' in a restricted
setting, which is a thought-provoking phenomenon (though I have some
reservations about its relevance in practical settings, see below).
* I appreciate the elegant and rigorous application of powerful, established
tools from the field of topology. The theoretical results are also
supported with a clear, small-scale proof-of-concept numerical experiment.
* The presentation of the theory and experiments is very clear and
accessible. The figures are carefully designed and informative. The
technical details of the framework and results are complete and made the
paper quite accessible to me, even though I have limited knowledge of
topology (the primer in the appendix was appreciated).
* Clear, thorough acknowledgement of related and prior work in the dedicated
section and throughout the paper.
Weaknesses: The authors have convincingly demonstrated the existence of topological
obstructions under the stated assumptions. My main concern about this work is
the relevance of topological obstructions to deep learning practice.
1. I am not so concerned about the restriction to single-hidden-layer
networks. The results may not generalise to the multi-layer case, or
perhaps they will, it seems worth looking into with future work. However,
I am concerned that *even in the single-hidden-layer setting* the main
disconnectedness result holds only for networks with scalar output. The
authors clearly show that networks with multidimensional outputs do not
show obstructions.
With this in mind, do the authors believe that the topological
obstructions they have identified are relevant to deep learning practice?
If so, can they clarify this point?
2. Furthermore, I have concerns about the gap between gradient flow and
discrete gradient descent training. The authors explain that discretised
gradient methods are not confined to the same invariant sets as gradient
flow. It seems possible that this means discrete gradient methods could
circumvent topological obstructions, even if they exist in practical
settings.
1. Do the authors conjecture that the identified obstructions have
implications for gradient descent training?
2. If the authors would support such an 'obstruction hypothesis', I would
invite them to consider explicitly stating the hypothesis in their
introduction and/or conclusion.
3. The authors could also consider extending their simple numerical
experiments to explore whether and to what extent the demonstrated
obstruction affects training with larger learning rates or
momentum-based training methods.
Pending the authors' clarification of these points, the framing of the work
is somewhat confusing to me. It appears to me that the authors have shown
that topological obstructions to gradient flow are actually quite a
restricted phenomenon. An alternative framing for the paper's results would
be to say that they show that deep learning is often *not* held back by
topological obstructions. Could the authors comment on this alternative
framing?
I don't think this concern is fatal to the paper. The topological analysis
approach and most of the results are still informative and worth sharing even
if the topological obstructions themselves end up not arising in more
practical settings.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. The authors have presented a clear interpretation of the source of the
topological obstruction in terms of the relationship between each
unit's weights (lines 232 onwards). Do the authors have a similarly
enlightening interpretation of the breakdown of the topological
obstructions in the case with multidimensional outputs?
2. Another assumption of the work is that the loss function depends only on
the network weights via the output of the network. Am I correct in
understanding that if the loss function includes a regularisation term
that depends on, for example, an $L_p$ norm of the weights, then this
assumption would be violated and the analysis would not hold because
gradient flow would no longer always be perpendicular to the invariant
sets? This seems fine but might be worth acknowledging.
3. Appendix E extends the analysis to networks including biases, but stops
after deriving the zeroth Betti number in this setting. Could you please
clarify whether the analysis of section 5 is unchanged in this setting?
4. The contribution list does not mention the 'proof of concept' numerical
experiments. It seems to me that these experiments are a reasonably
important part of the paper's contribution and its evaluation to warrant
mentioning in the introduction (as well as the abstract).
5. Line 118: The authors describe the transformations as sending parameters
to different but observationally equivalent parameters. It is typically
true that the parameter produced is different but for some parameters or
for some transformations the transformations don't change the parameter.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The paper accurately describes the contribution including its limitations.
These limitations are adequately and carefully disclosed in multiple places
in the main text and discussed in detail in appendix F.
1. One main limitation is the simplicity of the architecture, with only one
hidden layer. The authors discuss barriers to extending the work to the
multi-layer case, namely the interaction between layers, which is left to
future work.
2. A second main limitation is the fundamental assumption of gradient flow.
The authors explain that the conservation laws that are crucial to their
analysis do not hold for discretised gradient descent.
3. A third main limitation is that the topological obstructions only exist in
the case of networks with a scalar output. This is clearly acknowledged in
the abstract and throughout the paper, but I think the authors could have
more thoroughly discussed the implications of this restriction, namely
that in many cases we should not expect topological constructions at all.
If the work is to be accepted I would encourage the authors to include the
valuable discussion in appendix F in the main text, and to extend this
discussion along the lines of (3).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Colleague, thank you very much for your useful and insightful review.
---
**Weaknesses**
**W1.**
At the moment it is hard to foresee if and how this can be relevant to deep learning practice as the results need to be extended to more general settings.
Provided the same holds also for multilayer networks, then we could see this obstruction being relevant as MLPs are often employed as building blocks of commonly used architectures like CNNs.
Focusing on the single-layer case, we think that the impact of the obstruction is also related to the network's width and the particular task at hand: if the network is large and the task is not too hard, it will probably be able to solve it without employing its pathological neurons, provided there are not too many of them.
Assuming, however, that the network is not too large, there are specific settings where obstructions could be a hindrance.
As you highlight, the case of scalar outputs is a limited scenario that is nonetheless relevant in tasks like binary classification.
As you can see from part 2 of the general rebuttal (Fig. 2 in the extra pdf file), we observe that, by training with GD, increasing the number of pathological neurons at initialization reduces the average classification performance.
**W2.1**
The precise relation with finite-step-size GD is an interesting direction that we will certainly explore in the future. In the simple preliminary experiments we performed, we observe that, even when the trajectories drift away from the invariant set, such drift is usually not able to push them over the obstruction. The experiment described in part 3 of the general rebuttal shows, however, that this behavior can occur for some values of the learning rate, which are neither too low nor too high. Our conjecture is that, in general, this is more likely to happen when the initial value of $c_k$ is negative but small, so that the connected components are close to one another.
**W2.2**
Given that all of our analysis is based on the loss function's invariance w.r.t. rescaling and on the parameters evolving through GF, we think that the addition of momentum would break the setup and result in trajectories that are not constrained to $\mathcal{H}(c)$.
Regarding the effects of higher learning rates, please refer to part 3 of the general rebuttal, where we show a simple experiment in which the trajectories circumvent the obstruction.
**W2.3.**
We agree that, from this first set of results, it seems that the topological obstruction induced by symmetries is a restricted phenomenon. However, the fact that there is the possibility of a fundamental obstruction to training is not something that is commonly known or expected by the community. Therefore, we believe that the framing we gave to the paper is justified, as it serves to highlight this possibility. Moreover, while our analysis clearly identifies obstructions in some cases, it is, in principle, hard to exclude that extending the results to more general settings will uncover other obstructions of a similar kind. Stating that "deep learning is often not held back by topological obstructions" would, therefore, be a bit optimistic at this point, even if the probabilistic analysis of part 1 of the general rebuttal seems to corroborate this conjecture for two-layer networks.
---
**Questions.**
**Q1.**
Yes, we can give some intuition on why there is no obstruction for multiple outputs. Let us first consider a single hidden neuron, with incoming weights $w^{(1)}\in\mathbb{R}^d$ and a single output weight $w^{(2)}\in\mathbb{R}$. If the neuron is pathological, we have that $||w^{(1)}||^2_2 - (w^{(2)})^2 = c$ with $c<0$. Since the weights $w(t)$ are continuous, for $w^{(2)}$ to change sign, its value needs to pass through 0; but, under the condition above, this cannot happen, as $(w^{(2)})^2 = ||w^{(1)}||^2_2 - c >0 \implies w^{(2)}\neq 0$. Consider now multiple outputs $w^{(2)}\in\mathbb{R}^e$, so that the condition becomes $||w^{(1)}||^2_2 - ||w^{(2)}||^2_2 = c$. Now, any component of $w^{(2)}$ can change sign by passing through 0, because the other components can compensate by increasing their magnitude to keep the condition satisfied.
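This intuition can also be checked numerically. The sketch below (using illustrative, hypothetical data and hyperparameters rather than the paper's setup) runs small-step gradient descent on a single ReLU hidden neuron with scalar output and checks that $c = \|w^{(1)}\|_2^2 - (w^{(2)})^2$ is approximately conserved and that $w^{(2)}$ never changes sign:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))       # toy inputs (illustrative)
y = rng.normal(size=64)            # toy targets (illustrative)
w1 = 0.1 * rng.normal(size=3)      # incoming weights of the hidden neuron
w2 = 1.0                           # outgoing scalar weight -> c0 < 0
c0 = w1 @ w1 - w2 ** 2             # conserved quantity at initialization

lr = 1e-3                          # small step size approximates GF
w2_min = w2
for _ in range(5000):
    pre = X @ w1
    mask = (pre > 0).astype(float)
    h = mask * pre                                     # ReLU activation
    g = h * w2 - y                                     # residual of 0.5*MSE
    gw2 = (g * h).mean()                               # grad w.r.t. w2
    gw1 = ((g * w2 * mask)[:, None] * X).mean(axis=0)  # grad w.r.t. w1
    w1 -= lr * gw1
    w2 -= lr * gw2
    w2_min = min(w2_min, w2)

c_final = w1 @ w1 - w2 ** 2        # per-step drift is only O(lr^2)
```

Because $c_0 < 0$ forces $(w^{(2)})^2 = \|w^{(1)}\|_2^2 - c > 0$ along the (approximate) flow, the output weight stays bounded away from zero throughout training.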
**Q2.**
Yes, this is correct. If we add $L_p$ regularization, then the loss function is no longer invariant to the action of the rescaling transformation, and thus, the learning trajectory is, in general, not constrained anymore to lie on $\mathcal{H}(c)$.
**Q3.**
Yes, all the results shown in Section 5 also hold when biases are taken into account. This comes from the fact that, from the formal point of view, biases are the same as the weights of $W^{(1)}$. We can thus treat a two-layer NN with biases as a biasless two-layer NN with $d+1$ instead of $d$ inputs. If we extend the permutation action of Eq. (4) to also permute biases, the results will be left practically unchanged except for the first hypothesis of Theorem 2, which will now become $d\geq 1$ instead of $d>1$.
**Q4.**
Thank you for the comment. If accepted, we will add it to the final version of the paper.
**Q5.**
Thank you for the observation. We will consider rephrasing that sentence to make it more precise, e.g., "We describe two kinds of transformations whose orbits are made of observationally equivalent parameters only."
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: I thank the authors for their clarifying rebuttals. I especially thank the authors for their intuitive explanation of the resolution of the obstruction in the multi-output case. I invite them to include this and their other clarifications in future revisions of the paper. The three new experiments are also a very welcome addition to the submission.
At the moment I am maintaining my recommendation. I retain some of my concerns about the framing and relevance of the results. I am about to post a top-level comment on this topic since concerns about relevance were also raised by a few other reviewers. | null | null | null | null | null | null |
SEA: State-Exchange Attention for High-Fidelity Physics Based Transformers | Accept (poster) | Summary: This paper presents a novel neural network architecture (SEA) for solving partial differential equations (PDEs) in physical problems. This architecture effectively utilizes information exchange between multiple fields and the conservation quantities of physical systems to correct model predictions, achieving smaller errors and improved generalization. This has significant value for industrial applications, as it can greatly reduce the time cost for solving specific problems.
Strengths: **Originality:** The authors propose a novel neural network architecture (SEA) that effectively utilizes information exchange between multiple fields while cleverly incorporating physical conservation quantities. This maximizes the use of the physical system's symmetry to reduce errors.
**Clarity:** The paper clearly defines concepts and provides a clear and explicit explanation of the motivation for proposing this architecture. The presentation of experimental results is detailed, and the overall model structure is presented in an intuitive and clear manner.
**Significance:** The model in this paper offers valuable practical insights into integrating neural networks into PDE solvers, providing a fresh perspective in this field. This has important implications for enhancing productivity in the industrial sector.
**Quality:** The language of the paper is rigorous and clear, with appropriate citations of previous work. The overall approach is straightforward and easy to understand. The experimental results include comparisons with multiple similar models for reasonable and effective evaluation.
Weaknesses: Further exploration of the model's generalization ability is needed, such as evaluating model errors under more complex boundary conditions. Additionally, presenting the training and inference costs of the model would provide a better assessment of its potential for industrial applications.
Technical Quality: 3
Clarity: 4
Questions for Authors: See comments
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***
**W1. Further exploration of the model's generalization ability is needed, such as evaluating model errors under more complex boundary conditions. Additionally, presenting the training and inference costs of the model would provide a better assessment of its potential for industrial applications.**
Our future work will focus on addressing the scalability of the architecture and evaluating the model on a wider range of scenarios. We plan to conduct further evaluations using the PDEBench dataset and other datasets from different domains. Details on costs and time complexity are discussed in the global rebuttal and provided in the attached PDF.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. They help resolve my questions. | Summary: The paper introduces the State-Exchange Attention (SEA) module, a new cross-attention module that enables information exchange between state variables in physics-domain transformer modules. The authors evaluate the performance of this module in both single-phase and multi-phase 2D fluid settings.
Strengths: 1. Novelty: The State-Exchange Attention architecture represents an interesting and new contribution in architecture for physics-domain transformer models. This module enables multi-directional information exchange between state variables, and as far as I know is the first of its kind for physics-domain transformer architectures.
2. Performance: The SEA module significantly reduces rollout error accumulation, which is a major issue in current sequential models. The paper reports substantial improvements over competitive baseline models in both a single-phase and multi-phase context.
3. Strong Evaluation: The SEA-integrated model is well evaluated across different computational fluid dynamics (CFD) cases, showing consistent performance improvements. This includes detailed experiments on both single-phase and multiphase flows.
Weaknesses: 1. Scalability Concerns: The current architecture is relatively small, using a transformer with only 1 layer and 8 attention heads. I am curious how the model will scale at larger architecture sizes. Moreover, as the authors themselves note, there are also concerns about how the model will scale with larger number of state variables.
2. Diversity of experiments: The authors evaluate performance on one instance of single-phase and one instance of multi-phase flow, both in two dimensions. How does the model fare in 1D and 3D settings? How does it fare in a wider variety of fluid mechanics problems (like those presented in PDEBench/PDEArena)?
Minor comments:
- It seems like the general gist of paragraphs on lines 179-186 and lines 187-192 are almost identical. Was this paragraph repeated by accident?
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Are there ablation studies available for single-phase flow (as is sort of provided for multi-phase flow)? In other words, how does a basic transformer fare in this setting?
2. How resource-intensive is training this model vs. traditional transformer architectures? Some indication of this would be very helpful.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***
**W1. Scalability Concerns: The current architecture is relatively small, using a transformer with only 1 layer and 8 attention heads. I am curious how the model will scale at larger architecture sizes. Moreover, as the authors themselves note, there are also concerns about how the model will scale with larger number of state variables.**
As dictated by the trade-off between available data and model complexity, increasing the complexity of the model would not be ideal in either case. We carefully studied the architectural aspects and converged on the same architecture as the transformer-based works we compare against [1,2]. We also provide our model-dimension study in the attached PDF.
***
**W2. Diversity of experiments: The authors evaluate performance on one instance of single-phase and one instance of multi-phase flow, both in two dimensions. How does the model fare in 1D and 3D settings? How does it fare in a wider variety of fluid mechanics problems (like those presented in PDEBench/PDEArena)?**
Our model is specifically designed to address the coupling between state variables governed by different PDEs. In a 1D scenario, reducing the model will yield the same results as those found in the literature, as there is only one state variable and information exchange becomes redundant (here, we focus on the temporal model and not the ViT encoder-decoder). The 2D problems studied, such as the 2D Navier-Stokes and 2D Navier-Stokes with volume of fluid PDE, were chosen due to the strong coupling between the variables. We avoided 3D cases in this study for clearer demonstrations, however the model is by no means limited to 2D.
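For intuition only, a minimal numpy sketch of cross-attention-style information exchange between two coupled state variables is shown below. All shapes and names are hypothetical; this is not the actual SEA module:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, Wq, Wk, Wv):
    """One state variable's latent tokens attend to another's."""
    Q, K, V = q_tokens @ Wq, kv_tokens @ Wk, kv_tokens @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # attention weights
    return A @ V

rng = np.random.default_rng(0)
d = 16                                   # hypothetical latent dimension
u = rng.normal(size=(10, d))             # e.g. velocity-field tokens
p = rng.normal(size=(10, d))             # e.g. pressure-field tokens
W = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(6)]

# bidirectional exchange with residual connections
u_upd = u + cross_attention(u, p, *W[:3])   # u receives info from p
p_upd = p + cross_attention(p, u, *W[3:])   # p receives info from u
```

With more than two state variables, each field would attend to every other field's tokens, giving the multidirectional exchange discussed in the review.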
***
**W3. It seems like the general gist of paragraphs on lines 179-186 and lines 187-192 are almost identical. Was this paragraph repeated by accident?**
This is indeed an editing error; we appreciate your detailed reading and thank you for pointing it out.
***
**Q1. Are there ablation studies available for single-phase flow (as is sort of provided for multi-phase flow). In other words, how does a basic transformer fair in this setting?**
The ablation studies are presented in an additional table within the provided PDF and the supplementary materials. This table demonstrates the model's performance under different information exchange modes: addition of information for exchange, SEA attention mechanism for exchange, and no information exchange. Additionally, this table includes the case where state variables share a mutual latent space, and no explicit information exchange is defined.
***
**Q2. How resource-intensive is training this model vs. traditional transformer architectures? Some indication of this would be very helpful.**
We have added the information regarding the training time to the global rebuttal, and further demonstrated the inference time for the full trajectories of studied cases with respect to the model dimension in the provided PDF.
***
# References
[1] Luning Sun, Xu Han, Han Gao, Jian-Xun Wang, and Liping Liu. Unifying predictions of deter-ministic and stochastic physics in mesh-reduced space with sequential flow generative model. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 60636–60660. Curran Associates, Inc., 2023.
[2] Xu Han, Han Gao, Tobias Pfaff, Jian-Xun Wang, and Li-Ping Liu. Predicting physics in mesh-reduced space with temporal attention. arXiv preprint arXiv:2201.09113, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful response and for the additional details in the provided PDF. I am satisfied with the authors' comments and additional details. I will keep my score. | Summary: The authors present a novel and interesting approach to physics-domain transformer models that exchanges information between state variables and demonstrates strong performance improvements relative to SOTA models on hard fluid dynamics problems. The paper is well written and the results compelling.
Strengths: I find the multidimensional information exchange architecture employed by the authors to be interesting, the improvement over existing models to be significant on challenging problems, and the paper and figures to be quite well done. I would like to see more thorough evaluation of the models, perhaps including application to a non-fluid domain or to a problem with a few more underlying state variables, and it is highly desirable that code be made available with the paper.
Weaknesses: #### Major concerns:
1. Perhaps I missed this somewhere, but will code be made available with this paper? It seems too much to require readers to re-implement this architecture to use it.
2. The supplement is helpful but could contain further experimental details, further details on the layers (and their trainability), how projections are performed, and more complete explanation of results and metrics.
3. If training the models takes just 15-20 minutes, it would be nice to see the effects of the embedding dimensionality and simulation parameters explored more thoroughly.
4. The authors point out that a limitation might be scaling to a large number of state variables. Could they present an example on a problem with more than two state variables so that the "multidirectional" component is more than bidirectional, and perhaps add some thoughts on how this might scale?
#### Minor concerns:
There are a number of places the notation could be clarified and some typos, for example:
By line:
- 126: argmax optimization variable is not specified (presumably z_1:T)
- 159: the X variable is never defined, and may be superfluous here... maybe just use z_i^k?
- 166: T should be italic?
- A number of variables throughout seem to be used without introduction.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the padding work in the case of graphs with highly variable node density (for example on an adaptive mesh with much denser representation near the cylinder as is common in practice)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have touched on potential limitations related to number of state variables but could discuss this in more detail, as many problems of interest contain more than two state variables.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***
**W1 and W2: Perhaps I missed this somewhere, but will code be made available with this paper? It seems too much to require readers to re-implement this architecture to use it.**
The code is prepared and will be released with the paper. We will also add more implementation details to the supplementary materials for better reproducibility.
***
**W3: If training the models takes just 15-20 minutes, it would be nice to see the effects of the embedding dimensionality and simulation parameters explored more thoroughly.**
The optimal embedding dimensions were determined for both cases by experimentation. It was found that lower embedding dimensions made reconstruction difficult and amplified errors, while higher dimensions required more computation and led to model overfitting. A detailed study, included in the provided PDF (Figure 1), will be added to the supplementary material to further justify the chosen dimensions.
***
**W4: The authors point out that a limitation might be scaling to a large number of state variables. Could they present an example on a problem with more than two state variables so that the "multidirectional" component is more than bidirectional, and perhaps add some thoughts on how this might scale?**
One potential field in physics and materials science with a large number of state variables would be phase-field simulations capturing the mesoscale dynamics of grains. These 3D problems tend to have one state variable per grain, resulting in M coupled equations, where M represents the number of variables. Another example would be multiphase flows with more than two phases. Since the history of each variable is resolved with its corresponding transformer, this work becomes limited in these cases. Even though the number of variables does not theoretically limit the model itself, the required computation increases quadratically with the number of variables. However, this is no different from other works in the literature where all the variables share the same latent space and one transformer resolves the dynamics of all variables. In those cases, the latent dimension must be increased when more variables are stored on the mesh, resulting in higher computation. Scalability is further discussed in the global response. We also intend to address scalability in future work and include a wider range of test cases in different domains to evaluate the performance of the model.
***
**Minor concerns:**
We appreciate the corrections; these issues have all been fixed.
***
**Q1. How does the padding work in the case of graphs with highly variable node density (for example on an adaptive mesh with much denser representation near the cylinder as is common in practice)?**
In this study, all patches are uniformly sized, similar to the Vision Transformer (ViT). To ensure consistency, each patch is padded with zeros up to the maximum number of elements found in any patch within the domain. Prior to this padding, we introduce a small noise in the fields stored on the mesh so that no element has an exact value of 0 (keeping these elements active during learning and separating them from the pads). Further, we adopt a GeLU activation function during the downsampling of each patch, so the pads contribute nothing in the encoder-decoder pair. By default, this treatment captures more information in patches with high mesh density and less in those with a sparser structure (heavily padded patches).
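To make the padding scheme above concrete, here is a minimal numpy sketch; `pad_patches`, the noise scale `eps`, and the example patch values are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pad_patches(patches, eps=1e-6):
    """Pad variable-size patches with zeros up to the largest patch.

    A tiny noise is first added so that no real element is exactly 0,
    keeping real mesh values distinguishable from the zero pads.
    """
    max_len = max(len(p) for p in patches)
    padded = np.zeros((len(patches), max_len))
    for i, p in enumerate(patches):
        noisy = p + eps * rng.standard_normal(len(p))  # avoid exact zeros
        padded[i, :len(p)] = noisy
    return padded

# Three patches of different node counts, as on an adaptive mesh
# that is much denser near the cylinder than in the far field.
patches = [np.array([0.5, -1.2, 3.0, 0.7]), np.array([1.1]), np.array([0.0, 2.2])]
out = pad_patches(patches)
print(out.shape)  # (3, 4)
```

Because every real element is nudged away from exactly 0, downstream layers can treat exact zeros as inactive pads.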
---
Rebuttal Comment 1.1:
Title: Thank you.
Comment: We are satisfied with the authors' response and have increased our score accordingly. | Summary: The study proposes an approach for autoregressive spatiotemporal estimation of a dynamical system state, e.g. the solution of a time-dependent PDE. In particular, a Vision Transformer (ViT) model is adapted to solving PDEs where each state variable is tokenized similarly to tokenizing an image, and a novel State-Exchange Attention (SEA) module allows interaction and fusion of these tokens through cross-field attention. This enables state variables in the system represented by tokens to exchange information and capture the physical relationships and symmetries between fields. The approach is evaluated on the flow past a cylinder benchmark, showing improvement of the error against current neural network approaches.
Strengths: S1. The work presents a novel system based on the vision transformer to solve PDEs in an autoregressive way. The system is constructed in a logical way, and the SEA component developed for the interaction of field variables is sensible and appears to be applicable for capturing such interactions.
S2. Evaluation of the model on 2D flow past a cylinder shows advantage of the model vs. existing approaches.
S3. The model seems to be generalizable to other PDEs and even to simulations of physical processes that do not have formulated equations.
Weaknesses: W1. It is claimed that the rollout errors are effectively controlled by the SEA model. There's very little explanation/derivations of the source of these errors. It would be helpful to define these errors rigorously and also if possible to show (graphically or rigorously) how SEA model allows for better control of the error.
W2. Ablations of SEA model components and how their contribution to the overall accuracy of solution are missing. These could inform the critical parts of SEA and better characterize SEA.
W3. While generalization seems plausible, evaluation was performed on only one basic dataset, which introduces uncertainty in how the approach generalizes.
Technical Quality: 3
Clarity: 4
Questions for Authors: Q1. What is the computational complexity (time, storage) of SEA and how does it compare to other approaches and PDE simulation?
Q2. Does SEA apply to high dimensional state variables, e.g. 3D flows or ND state variables? If yes, it would be important to include in the manuscript (appendix) practical steps of extending the model to higher dimensions.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The technical contributions of this work are subtle since ViT are well known as powerful models and the SEA component seems to be sensible and inherits structure from prior approaches. This could be considered as a limitation, however, on the other hand, the impact of this work lies in the application of solving dynamical equations and simulating dynamical processes and appears to be significant advancement. The ability to implement existing components in such a way that they will be effective is not a straightforward endeavor. Hence, ablation results, discussion of computational complexity and more explanation of the error would be vital in this work. Also, addressing Weaknesses and Questions above would improve clarity, generalization and evaluation of this promising work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***
**W1. It is claimed that the rollout errors are effectively controlled by the SEA model. There's very little explanation/derivations of the source of these errors. It would be helpful to define these errors rigorously and also if possible to show (graphically or rigorously) how SEA model allows for better control of the error.**
The overall error of this model with respect to the numerical results (neglecting discretization errors) consists of two main components: the reconstruction error (from the encoder-decoder) and the temporal error (from the temporal transformer). The SEA module explicitly addresses the temporal error that accumulates during inference (autoregressive generation) to form the total rollout error. Given a set of coupled partial differential equations with M state variables, the optimal temporal model would match the solution of the discretized numerical model. Note that the discretized equation for each variable contains all or a subset of the other state variables, since these equations are coupled. Hence, embedding all the variables in the same embedding space with the sole objective of a reconstruction loss may mix information from the different fields unevenly, introducing an additional error term associated with the coupling between variables. Our work eliminates this source of error by creating a separate embedding space for each variable, with the SEA module mimicking the coupling between variables. A theoretical analysis has been prepared and will be attached to the supplementary materials.
***
**W2. Ablations of SEA model components and how their contribution to the overall accuracy of solution are missing. These could inform the critical parts of SEA and better characterize SEA.**
The ablation study is summarized in Table 1 of the attached PDF. This table will also be added to the supplementary material for a better understanding of the model. The included table contains the following information:
1. Performance of the model with the SEA module for information exchange
2. Addition of the states to other states for information exchange
3. No information exchange (variables solved with different transformers)
4. A single transformer architecture with a shared latent space for all variables (Common in literature).
This table demonstrates that models utilizing modes of information exchange outperform those without information exchange or with a shared latent space.
***
**W3. While generalization seems plausible, evaluation was performed on only one basic dataset, which introduces uncertainty in how the approach generalizes.**
Cylinder flow and the multiphase cases here were chosen to demonstrate the model's performance in cases with different governing PDEs. In the multiphase case, in addition to the Navier-Stokes equations, the volume-of-fluid PDE is solved coupled with the momentum equation. In this case, we focus on demonstrating how the SEA module creates the coupling between volume fraction and velocity. Given the clear decreasing trend of the error on every variable, as shown in the ablation table in the provided PDF, and the identified sources of error (explained in W1), we believe this model generalizes well to cases with different governing PDEs, initial conditions, or boundary conditions.
***
**Q1. What is the computational complexity (time, storage) of SEA and how does it compare to other approaches and PDE simulation?**
The time complexity of the encoder-decoder is similar to that of the transformer and is unaffected by the number of state variables. Similarly, the storage is the same as for the ViT base model. However, the temporal model with the SEA module has an additional quadratic term in the number of state variables. If we denote the number of state variables as M, the time complexity becomes O(M^2 * D * N^2), where D and N are the model dimension and the input block size, respectively.
The current state-of-the-art studies resolving full trajectories use techniques with a transformer backbone, which is dominated by the self-attention term O(D*N^2) [1,2].
Although the inference time remains orders of magnitude below that of the numerical simulations, the time complexity increases quadratically with respect to the number of state variables, which is the main limitation of this module. More information on computational aspects is provided in the global section.
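The quadratic scaling in M can be sketched with a rough FLOP estimate; `sea_attention_cost` is a hypothetical helper for illustration, not part of the released code:

```python
def sea_attention_cost(M, D, N):
    """Rough FLOP count for pairwise cross-attention among M state
    variables: each ordered pair of variables computes one attention
    map of cost O(D * N^2)."""
    return M * M * D * N * N

base = sea_attention_cost(M=1, D=256, N=128)   # single-variable baseline
print(sea_attention_cost(2, 256, 128) / base)  # 4.0  -> doubling M quadruples the cost
print(sea_attention_cost(4, 256, 128) / base)  # 16.0
```

In contrast, a single shared-latent transformer with a self-attention cost of O(D * N^2) avoids the M^2 factor but must grow D as more variables are stored on the mesh.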
**Q2. Does SEA apply to high dimensional state variables, e.g. 3D flows or ND state variables? If yes, it would be important to include in the manuscript (appendix) practical steps of extending the model to higher dimensions.**
The model is capable of handling 3D flows and, in general, n-dimensional and multivariate problems. The procedure for this will be added to the appendix, and the code, which includes a complete implementation for any specified number of variables, will be released with the paper.
***
# References
[1] Luning Sun, Xu Han, Han Gao, Jian-Xun Wang, and Liping Liu. Unifying predictions of deter-ministic and stochastic physics in mesh-reduced space with sequential flow generative model. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 60636–60660. Curran Associates, Inc., 2023.
[2] Xu Han, Han Gao, Tobias Pfaff, Jian-Xun Wang, and Li-Ping Liu. Predicting physics in mesh-reduced space with temporal attention. arXiv preprint arXiv:2201.09113, 2022.
---
Rebuttal Comment 1.1:
Title: Authors Rebuttal Response
Comment: I'd like to thank the authors for providing an informative rebuttal and additional results in response to comments that I made in my review. The authors clarified the questions and comments that I made and I would like to encourage the authors to include the results in the provided pdf and their discussion in the revision and also rigorous definition of the involved errors along with succinct explanation as the authors provide in the rebuttal to W1. | Rebuttal 1:
Rebuttal: ***
We would like to thank all referees for taking the time to provide great feedback and suggestions on how to improve the paper.
Here we would like to address two main points that were questioned by all the referees, first the limitation of the work and then the efficiency of the model.
***
# Limitation
We first briefly review the limitation and then suggest a potential way forward for the future works.
The SEA module, effectively acting as a cross-attention among the state variables, is quadratic in time with respect to the number of state variables, which limits extending the module to a large number of variables. The time complexity of this model can be expressed as O(M^2 * D * N^2), where M is the number of variables, D is the dimension of the model, and N is the sequence length. However, various strategies can be studied in future work to mitigate this complexity. A key technique involves leveraging prior knowledge of the equations or the physical system to inform the cross-attention process. This prior knowledge can be used to restrict cross-attention to strongly coupled variables (e.g., velocity and pressure). Another approach involves embedding variables of the same type together. For example, in the 3D Navier-Stokes equations, the three components of velocity would be embedded together, while source terms like pressure and other external or internal forces would be embedded together. In our future work, we will conduct a comprehensive investigation of these factors. Additionally, we will explore the implementation of this model within a probabilistic framework (guiding the output based on the coupling between variables), leveraging its ability to effectively resolve the history of variables and their coupling and to address uncertainty during inference.
***
# Computational aspects
Another point raised by all the reviewers pertains to the computational aspects of the model. We have recorded the training and inference times for both cases. In the paper, the reported training was conducted using teacher forcing on both the training and validation sets. Additionally, we present the training time for the scenario where validation is tracked in an autoregressive manner. While both approaches yield meaningful training, we include the autoregressive validation case because the model's ultimate task is to estimate full trajectories.
The training for all cases is done with a single A100 GPU. In cylinder flow we resolve 5600 time points, with 7697 meshing points, and 3 variables of u, v, and p stored on each point.
For the multiphase case we have a total of 7400 time steps and 8241 meshing points. Similar to the cylinder flow, we have three variables here, namely u, v, and alpha, where alpha is the volume fraction capturing the phases and their interface. The following computational details will be added to the supplementary materials.
The training cost of each case is listed below:
1) Training with validation using teacher forcing
* Cylinder flow: 20-30 Minutes
* Multiphase flow: 60-70 Minutes*
2) Training with validation using autoregressive approach
* Cylinder flow: 45-60 Minutes
* Multiphase flow: 2 Hours
*A typo in the main manuscript reported the multiphase flow training time as 15 minutes. The correct training time is provided here and will be corrected in the paper.
Inference of the full trajectory for all cases remains below 30 seconds; a detailed plot of inference time with respect to model dimension is provided in the attached PDF.
***
# Provided PDF
In the provided PDF we have included a detailed ablation study, which also contains results for the case with a shared latent space between all variables (Table 1). We also evaluate the performance of the time-invariant parameter injection module in Table 2. Another critical parameter in training this model is the batch size: increasing it was observed to raise the error in autoregressive generation and potentially cause divergence. Relevant information on the batch size is also provided in Table 2. To fully understand the impact of dimension choice, we included several plots illustrating the reconstruction error (MSE) of the ViT encoder-decoder, the temporal error (relative MSE), and the inference time. These plots, along with the provided training cost, informed our dimension selection.
The provided results in the PDF will be appended to the supplementary materials along with a theoretical analysis of errors to demonstrate which component of error is being addressed by our model.
Pdf: /pdf/f4e24453c4f676767d8baba5502d4a4a7fc708d8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Beware of Overestimated Decoding Performance Arising from Temporal Autocorrelations in Electroencephalogram Signals | Reject | Summary: The paper highlights how temporal autocorrelations in EEG data can lead to misleadingly high decoding accuracy in brain-computer interface (BCI) tasks. Using a novel approach with a "watermelon EEG dataset," the authors demonstrate that many reported high performances may exploit these autocorrelations rather than genuine neural activity. They propose a unified framework to address this issue across various EEG tasks and recommend improved experimental designs and data splitting strategies to ensure more accurate and reliable results in BCI research.
Strengths: 1. Novel Problem Formulation: The paper introduces a novel problem formulation by addressing the potential overestimation of decoding performance in EEG-based brain-computer interfaces (BCIs) due to temporal autocorrelations. This is an innovative perspective that has not been extensively explored in prior research.
2. Creative Use of Non-Human Subjects: The use of watermelons as a model to eliminate stimulus-driven neural responses is highly original. This approach allows for the isolation of temporal autocorrelation effects in EEG signals, providing a unique method to investigate the problem.
3. Impact on BCI Research: The findings have significant implications for BCI research, highlighting a critical issue that could affect the validity of many existing studies. By identifying and addressing this pitfall, the paper provides good insight for more accurate and reliable BCI systems.
Weaknesses: Please see the questions below.
Technical Quality: 4
Clarity: 2
Questions for Authors: 1. The paper mentions using a "watermelon EEG dataset" to eliminate stimulus-driven neural responses. What is the scientific rationale and justification for this choice? Why were watermelons chosen over potentially more appropriate models? Have other studies validated the effectiveness of this method?
2. The authors claim that temporal autocorrelations lead to overestimated decoding performance but does not provide a detailed explanation of the specific mechanisms and extent of this impact. How do temporal autocorrelations affect different tasks (e.g., emotion recognition) specifically? Are there quantifiable metrics used to assess this impact?
3. Reproducibility of Experimental Design: The experimental design and data splitting strategies described in the manuscript—are they reproducible across different datasets and experimental conditions? For instance, have similar phenomena been observed with other types of EEG data, such as motor imagery EEG data? Are there specific experimental results supporting this generalizability?
Confidence: 5
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: While the authors recommend avoiding certain data splitting strategies, the practical implications and feasibility of implementing alternative strategies in real-world BCI applications are not fully explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your careful review and constructive suggestions.
We are pleased that you mentioned
"The findings have significant implications for BCI research,
highlighting a critical issue that could affect the validity of many existing studies",
as this was the initial objective of our work.
**Q1: Questions about phantom EEG**
We have already addressed this issue in the __Author Rebuttal__.
We hope that it has answered your question.
**Q2: Explanation of the specific mechanisms of temporal autocorrelations. How do temporal autocorrelations affect different tasks (e.g., emotion recognition) specifically? Are there quantifiable metrics used to assess this impact?**
This question encompasses three separate questions, each of which we will reply individually.
The specific mechanisms behind temporal-autocorrelations (TA) induced overestimated decoding performance was analyzed and discussed in Appendix A.7, on human and phantom EEG.
For phantom EEG that records only noise, the significant TA were found at low frequencies and power line frequencies of power spectra.
For human EEG, the TA were observed across various frequencies.
Those results indicated the presence of TA in both noise and neural activity.
Importantly, the TA decay over time, following power-law scaling behavior.
As a result, when two continuous segments of EEG with different class labels are segmented into multiple samples,
the similarity among within-class samples (which are temporally closer) will be higher than the similarity among between-class samples,
giving the samples of each segment a unique "domain feature".
Classifiers would associate the class label with the domain features (as shown in Figure 1 and Figure 2 in the attached PDF), leading to overestimated decoding performance.
As mentioned earlier, TA always exist in EEG, which add unique domain features to continuous EEG segments with the same class label.
However, due to different experimental designs in various BCI tasks, TA have different effects on decoding.
In some BCI tasks, such as motor imagery and image decoding, rapid-design paradigms can be used to switch class labels frequently in a short time (e.g., a few seconds or less),
such that each trial is treated as one sample and temporally adjacent samples mostly have different class labels (unless by chance, adjacent samples have the same label).
As there is no mapping between domain labels and class labels among those samples,
the model will not overfit to domain features even during training.
In contrast, in some BCI tasks, such as emotion recognition and ASAD, requiring subjects to switch class labels every few seconds may not be reasonable.
In addition, in some BCI tasks, such as sleep stage classification and seizure detection,
the switch frequency of class label is uncontrollable.
In both of these latter cases, an experimental condition usually lasts for minutes.
Continuous EEG segments with the same label need to be split into many samples during training and testing.
Even when using reasonable data splitting strategies such as leave-subjects-out,
the model could still utilize domain features to distinguish different classes during the training stage,
thereby interfering with the learning of class-related features.
Due to the coupling of domain features and class-related features in an actual EEG dataset,
it is challenging to precisely quantify the impact of TAs on decoding tasks.
We will acknowledge this limitation in section 4.3.
**Q3: Reproducibility of Experimental Design**
The experimental design and data splitting strategies described in the manuscript are reproducible across different datasets and experimental conditions.
As described in the __Author Rebuttal__,
we further added two datasets for two new tasks: the BCIIV2a dataset for motor imagery decoding and the SIENA dataset for epilepsy detection task.
The high decoding pitfalls were observed on SIENA dataset.
However, on BCIIV2a dataset, the pitfalls were absent due to the use of rapid-design paradigm during EEG recording, which was in line with the explanation in the __Reply to Q2__.
This generalizability was supported by the autocorrelation analysis which suggested that the TA features were always existing in EEG data (as shown in the Appendix A.7).
**L1: Aternative strategies in real-world BCI applications (Limitations)**
In the submitted version, we did not clearly describe the implications of our work.
Our work indicated the necessity of reducing the impact of EEG TA on BCI decoding and gave some suggestions on experimental design and model framework construction (detailed in the __Author Rebuttal__).
We will emphasize these practical implications further in the revised Discussion section.
We did not explore implementing alternative strategies to prevent models from overfitting to domain features.
This is indeed a limitation of this paper, as noted in the limitation section.
In future work, we will consider using domain generalization to mitigate the impact of EEG TA on decoding.
---
Rebuttal Comment 1.1:
Title: To authors
Comment: Thank you for the authors' efforts and responses.
I have raised my rating to 7.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you very much for your constructive review and recent reply. We are confident that your review has enhanced the paper's quality. | Summary: The paper investigates the potential overestimation of decoding accuracy in brain-computer interface (BCI) tasks that utilize EEG signals. The authors address concerns that high reported decoding accuracies may be attributed to the inherent temporal autocorrelation present in EEG signals rather than the actual decoding of neural responses to stimuli. It contributes to the field of BCI by identifying a potential source of bias in decoding performance, providing a novel dataset to study this issue, and emphasizing the need for careful experimental design to ensure the robustness and reliability of BCI systems.
Strengths: 1. This article explores the issue of overestimated decoding performance arising from temporal autocorrelations and verifies it through experiments; both the viewpoint expressed and the experimental process are highly enlightening for the BCI community.
2. The self-collected Watermelon EEG is interesting. The use of Watermelon EEG dataset to simulate EEG data without neural activity is a good method to isolate the effects of temporal autocorrelations.
3. The paper provides empirical evidence through experiments that show high decoding accuracies can be achieved even with non-neural datasets, suggesting that reported accuracies in BCI might be influenced by factors other than the models' ability to interpret neural information. The experiment is solid.
Weaknesses: 1. This article only covers image decoding, emotion recognition, and ASAD tasks; to further substantiate the viewpoint presented in this paper, evaluation on additional tasks or datasets is recommended.
2. The presentation still needs improvement, such as Figures 1 and 2. Some technical terms may be ambiguous, such as “domain”, and should be given more rigorous and clear definitions.
3. The paper only uses a simple CNN (or some parts of this CNN) for EEG classification. A broader range of model testing (e.g. EEGNet and EEG Conformer) would contribute to enhancing the reliability of the research presented in this paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see the weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors have addressed some limitations, but some questions remain. Please see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your careful review and constructive suggestions.
We are pleased that you also mentioned "emphasizing the need for careful experimental design to ensure the robustness and reliability of BCI systems",
which is exactly what we intended to achieve with this work.
**W1: This article only covers image decoding, emotion recognition, and ASAD tasks, and to further substantiate the viewpoint presented in this paper, the use of more other tasks or datasets is recommended**
We have added two EEG datasets: the BCIIV2a dataset for motor imagery (MI) decoding and the SIENA dataset for epilepsy detection. We should note that in the BCIIV2a dataset, researchers usually treat each domain as a sample, so the model does not rely on TA for decoding. We reorganized the Watermelon Dataset and the SparrKULee Dataset,
obtaining WM-BCIIV2a, SK-BCIIV2a, WM-SIENA, and SK-SIENA.
Additionally, we also performed decoding on the five actual datasets: CVPR, DEAP, KUL, BCIIV2a, and SIENA.
All those results are presented in Table R1 in the attached PDF in the __Author Rebuttal__.
**W2: The presentation still needs improvement, such as Figures 1 and 2. Some technical terms may be ambiguous, such as “domain”, and should be given more rigorous and clear definitions.**
We have added subplot Figure 1d and Table R1, and updated Figure 2 to better formalize the framework; these are presented in the attached PDF in the __Author Rebuttal__.
Table R2 is added to give clear definitions of some frequently used terms.
Table R3 is added to introduce the specific content of the terms "domain" and "class" for each dataset.
We hope that these changes will help improve the presentation.
**Table R2: Definitions of frequently used terms**
| **Term** | **Definition** |
|--------------------------|:----------------------------------------------------------------------------------:|
| **Class** | Distinct category that represents a specific EEG experimental condition |
| **Class-related feature** | EEG patterns arising from experimental condition |
| **Domain** | A segment of continuous EEG data with the same class label |
| **Domain feature** | EEG patterns of samples in a domain, arising from temporal autocorrelation |
**Table R3: The specific content of the terms "domain" and "class" for each dataset.**
| **Dataset** | **Domain** | **Class** |
|-------------|:-----------------------------------------------------:|:----------------------------------------:|
| **CVPR** | One of 40 blocks, each lasting 25 seconds | One of 40 image categories |
| **DEAP** | One of 40 trials, each lasting 60 seconds | One of 4 emotion categories |
| **KUL** | One of 8 trials, each lasting 6 minutes | Attention to the left or right direction |
| **BCIIV2a** | One of 576 trials, each lasting 4 seconds | One of 4 motor imagery conditions |
| **SIENA** | One of 4-20 EEG segments, each with varying duration | Epileptic or non-epileptic |
**W3: The paper only uses a simple CNN (or some parts of this CNN) for EEG classification. A broader range of model testing (e.g. EEGNet and EEG Conformer) would contribute to enhancing the reliability of the research presented in this paper**
We used a simple CNN for EEG classification to demonstrate that the domain features are easy to learn,
even with a single-layer CNN.
However, we agree that employing more models could be helpful to enhance the reliability of our research.
We added two classical models, EEGNet [1] and EEG Conformer [2] for the EEG classification tasks.
As shown in Table R1, these models exhibit results similar to those of the simple CNN,
except for EEGNet's extreme results on the CVPR dataset.
In most cases, the CNN outperformed EEGNet and EEG Conformer.
We must note that EEGNet and EEG Conformer were not designed for learning domain features.
On the contrary, they have achieved outstanding performance in some datasets where domain features are not shared between training and testing sets.
For instance, on the BCIIV2a dataset for motor imagery decoding, EEG Conformer achieved the highest performance even though no EEG preprocessing was performed.
[1] Lawhern et al., J. Neural Eng., 2018, doi: 10.1088/1741-2552/aace8c.
[2] Song et al., IEEE Trans. Neural Syst. Rehabil. Eng., 2023, doi: 10.1109/TNSRE.2022.3230250.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' efforts in the rebuttal. Most of my concerns have been addressed, and I have increased my score accordingly.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We sincerely appreciate your review; it has really helped improve our paper.
Strengths: An excellent intention to discuss problems with many overblown EEG decoding publications. Yet the conclusions are obvious, and many reputable researchers defend their approaches with leave-one-subject-out evaluations to avoid the obvious issues in training and testing data splitting identified by the authors.
Weaknesses: There were unacceptable errors in the use of EEG terminology, since what was actually recorded was environmental or amplifier Brownian noise after placing electrodes on a watermelon, which probably acted as an electromagnetic antenna capturing all possible low-frequency noises in a room. The CNN application with data splitting issues is too basic for NeurIPS.
Technical Quality: 1
Clarity: 1
Questions for Authors: Why was a questionable watermelon selected with the incorrect EEG label? Could simply shuffling the labels of actual EEG experiments, leading to subsequent overfitting, not demonstrate the authors' hypothesis of overfitting pitfalls in ML approaches?
Confidence: 5
Soundness: 1
Presentation: 1
Contribution: 1
Limitations: Watermelon cannot produce EEG, even if an EEG amplifier records some electrical noise.
The presented study thus hardly relates to EEG decoding problems but seems to report on obvious issues in machine learning due to erroneous data splitting into training and testing sets, thus making it too trivial for NeurIPS.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments.
We regret that the presentation of the key points and the specific contribution of this work was not clear enough,
but we hope our responses to your comments and questions can make them clearer.
**S1: Many reputable researchers defend their approaches with leave-one-subject-out evaluations to avoid the obvious issues in training and testing data splitting.**
The strategy of "leave-one-subject-out" is one way to avoid the pitfall,
as mentioned in the work such as ManyDG [1] and VBH-GNN [2].
However, due to individual differences, training one model for each subject could be more effective. If the dataset sizes are comparable, it has been shown that training models on a subject's own data yields better decoding performance for works that do not rely on EEG temporal autocorrelations (TA) for decoding [3].
Meanwhile, despite avoiding the pitfall, simply adopting the "leave-one-subject-out" strategy cannot prevent overfitting on domain features during the training stage (shown in Table 5 and Appendix A.5).
Our work suggests that the key to reducing the impact of EEG TA on BCI decoding is to decouple class-related features from domain features in actual EEG datasets.
**W1: Unacceptable errors in using EEG terms since instead some environmental or amplifier Brownian noises were recorded.**
The term "watermelon EEG" can be changed to "phantom EEG" in the revised version.
Please note that we introduced the concept of "phantom EEG" when we first mentioned "watermelon EEG" in the Introduction (lines 79-80).
The watermelon is widely used as a "phantom head" due to its conductivity being similar to that of human tissue,
its size and shape being similar to the human head, and its ease of acquisition.
The noise you mentioned is also recorded when EEG is collected from human subjects.
More details about "The rationality for using watermelon" could be found in __Author rebuttal__.
**W2: The CNN application with data splitting issues is too basic.**
This work is not about "CNN application with data splitting issues".
Our work concentrates on the impact of EEG temporal autocorrelations (TA) on various BCI decoding tasks.
The CNN was chosen as the EEG encoder because its structure is simple enough to show that EEG TA features are easy to learn.
More details about the "The contribution of the present study" can be found in the __Author rebuttal__.
**Q1: Why was a questionable watermelon selected with the incorrect EEG label?**
The reason for selecting a watermelon with manually assigned class labels is explained in the reply to __W1__ and
in the section "The rationality for using watermelon" of the __Author rebuttal__.
The advantage of using phantom EEG is that it controls the interference from stimulus-driven responses and subject-related factors,
allowing us to focus solely on the TA of the noise in EEG. A similar method has been used in previous neuroscience studies.
**Q2: Could simply shuffling the labels of actual EEG experiments, leading to subsequent overfitting, not demonstrate the authors' hypothesis of overfitting pitfalls in ML approaches?**
As shown in Table R1 in the attached PDF, we have added the experiments you suggested.
For the vast majority of BCI tasks, shuffling the labels of actual EEG experiments could demonstrate the hypothesis of overfitting.
However, when class labels and domain labels correspond one-to-one,
simply shuffling the labels of a real EEG dataset cannot illustrate the pitfalls.
For example, in the CVPR dataset, images of 40 classes are presented sequentially in 40 blocks.
Shuffling the class labels of these 40 blocks is equivalent to swapping the class labels of these 40 blocks in a classification task,
which does not affect classification accuracy.
Please note that our purpose is not merely to demonstrate the overfitting of TA features.
We used a unified framework to describe the mechanism of how TA affects EEG decoding across general BCI tasks
(detailed in "The contribution of the present study" in the __Author rebuttal__).
Therefore, using a phantom EEG that is independent of the stimulus, subjects, and experimental paradigm is a more reasonable choice
(detailed in "The rationality for using watermelon" in the __Author rebuttal__).
**L1: Watermelon cannot produce EEG, even if an EEG amplifier records some electrical noise. The presented study thus hardly relates to EEG decoding problems but seems to report on obvious issues in machine learning due to erroneous data splitting into training and testing sets, thus making it too trivial for NeurIPS.**
The rationale for using watermelon has been presented in the __Author rebuttal__.
Our study is not about erroneous data splitting in machine learning.
Instead, we focused on the impact of EEG TA on EEG decoding, providing guidance on EEG experimental design and decoding model frameworks.
(detailed in "The contribution of the present study" in the __Author rebuttal__).
Meanwhile, the issue of "erroneous data splitting" is not "obvious".
The issue of TA in EEG decoding has not been widely recognized.
Some researchers have argued that the effect of TA in EEG data on decoding is negligible, and many works that utilized TA features to obtain "overestimated decoding performance" have still been published.
As recently as last month, researchers proposed that TA can be controlled in experiments and that its effects are marginal [4].
They even refused to acknowledge the issues with their data splitting.
[1] C. Yang et al. ManyDG: Many-domain Generalization for Healthcare Applications. In ICLR, 2023.
[2] C. Liu et al. VBH-GNN: Variational Bayesian Heterogeneous Graph Neural Networks for Cross-subject Emotion Recognition. In ICLR, 2024.
[3] Y. Song et al. Decoding Natural Images from EEG for Object Recognition. In ICLR, 2024.
[4] S. Palazzo et al. The effects of experiment duration and supertrial analysis on EEG classification methods. In IEEE Trans. Pattern Anal. Mach. Intell., 2024.
---
Rebuttal Comment 1.1:
Title: Thank you for detailed rebuttal
Comment: The reviewer appreciates the detailed rebuttal provided by the authors. While the manuscript seems well-written in terms of machine learning, the key issue highlighted is the lack of "EEG" in the "watermelon/phantom EEG." This absence has led the reviewer to give a very negative evaluation. The problem of EEG amplifier noise, whether from the electromagnetic environment or semiconductor noise, is well-known. Despite the authors' references to similar publications, the fact remains that there is no biological tissue in an electronic circuit in the case of the so-called "phantom EEG"; it simply represents electronic circuit noise. Although the electronic circuit is likely the sole source of these technical artifacts, the current manuscript does not address this issue. The reviewer suggests that if the authors were to remove "EEG" from the title and replace "phantom EEG" in the manuscript with "amplifier electronic circuit noise," this change might be acceptable. However, this would render the contribution trivial, given that these issues are widely recognized in the electrophysiological community, just like volume conductance and limitations of EEG amplifier noise.
The reviewer's assessment of the manuscript remains unchanged. However, the reviewer acknowledges the potential value of the content to the machine-learning community. If the remaining reviewers can convincingly demonstrate the necessity and educational value of the publication in the machine learning community after adding a precise discussion about EEG amplifier noise as potentially the main source of those TAs, the reviewer would be willing to withdraw from the evaluation process without changing the negative evaluation. This is because such a contribution from an electrophysiological perspective would appear trivial.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors to Reviewer hA7i
Comment: Thank you for your discussion on using "phantom EEG".
As you mentioned, the "phantom EEG" records amplifier noise (environment or semiconductor noise). Human EEG, while capturing signals arising from the subject (neural activity, artifacts, etc.), also records these noises. The analysis results on "phantom EEG" indicate that significant temporal autocorrelations (TA) exists even in the noise. We must point out that "phantom EEG" is just one of the datasets we used in this work, and the same analysis was also conducted on datasets of Human EEG (SparrKULee Dataset, see line 142-147 in the manuscript). As an extreme example, the "phantom EEG" was used to decouple neural activities from the recorded Human EEG, rather than to demonstrate the main source of TA. Hence, removing "EEG" from the title is not reasonable. We understand your point that the "phantom EEG" indeed does not contain any "EEG" but records only noise, and we will add the discussion about what is really recorded in "phantom EEG" to avoid confusion in the manuscript.
BCI is a highly interdisciplinary field in which both electro/magnetic signal analysis and neural decoding methods are crucial. Our current work mainly focuses on neural decoding methods. We present our framework, highlighting the TA-induced pitfalls in current BCI decoding methods, and propose ways to avoid the impact of TA on decoding. | Summary: Authors hypothesise that the high temporal correlation of EEG data contributes to the high BCI decoding accuracies reported in some prior BCI studies. Specifically, the highly questionable data partitioning practice of splitting continuous EEG data with the same label (or subject) across train/test sets. They present a framework to assess the impact of temporal correlation of EEG features on three different BCI decoding tasks applied to independent datasets, human and watermelon (phantom) EEG data. The inclusion of watermelon dataset is to separate the influence of stimulus-driven responses from highly correlated temporal EEG features that is not fully eliminated when using human EEG data. Results based on the standard data partitioning show high BCI decoding performance for the various tasks even when using watermelon EEG data, and performance is significantly reduced to around chance level when the impact of temporal autocorrelation is mitigated with alternative data partitioning schemes.
Strengths: **Originality**
- The inclusion of “phantom EEG” recorded from watermelon to disambiguate stimulus-driven neural responses and temporal autocorrelation during the data analysis, which is not fully eliminated with human EEG recordings.
**Quality**
- Authors provide a theoretical basis to justify their hypotheses and experiment design plan.
- Analysis plan includes data partitioning used in different BCI decoding tasks (image classification, emotion recognition, auditory spatial attention) applied to a different BCI task (speech evoked response).
**Clarity**
- The paper is generally well-written. Problem well illustrated in Figure 1.
- Some areas require clarity (maybe figures?) to better illustrate the different analyses in the framework (for other applications) and results.
**Significance**
- Highlights the need for more robust experimental design and data partitioning practices in BCI decoding tasks to minimise the impact of inherent temporal correlations of EEG data on performance.
- The paper demonstrates a limitation of deep learning models (“black box”) in relation to correlation vs. causation.
Weaknesses: • Adding the performance of the current framework on the actual datasets (CVPR, DEAP, KUL, if publicly available), as well as additional independent BCI datasets would provide other benchmarks for comparison.
• Not sure why there is a need to match number of the subjects in the SparrKULee dataset to that of the WM “subjects”. The objective of the study is to provide a framework for exploring the impact of temporal correlation of EEG features on BCI decoding performance, not directly comparing both datasets. So, it is fine to include data from all subjects in SparrKULee database.
> To match the number of subjects in the Watermelon EEG Dataset, EEG data from 10 subjects… from the SparrKULee Dataset were used.
Technical Quality: 4
Clarity: 3
Questions for Authors: **Questions**
- Does “chance level” consider the distribution of samples per class?
- Can the authors elaborate on the specifics/differences in the setup for "works that do not rely on EEG temporal autocorrelation features for" BCI task decoding (lines 312-315) vs. those presented in the paper?
- Figure 2: Add more details in caption to understand the content in (task, domain, label, etc.). Same for lines 258-262 in results. Need to include visualisation with/without leave-domains-out during training to support results in Table 1. Same for Table 2/Fig 2C.
**Suggestions**
- Recommend that authors better formalise the framework. Authors can introduce each section in a generalised setting, with the conditions tested used as specific example. This would allow for others to apply the generalised framework to their respective application. - Illustration of the three BCI decoding tasks and data reorganisation would be useful for a non-BCI audience.
- The discussion section introduces new analyses rather than discussing the specifics/interpretations of the results presented earlier. While the additional analyses introduced in the discussion are highly relevant, their extended presentation suggests a different placement, so that the discussion focuses more on the interpretation/implications of the findings (referencing the relevant results) towards a conclusion of the paper.
- Table 5 (training, validation and test set splits) needs to be moved to the main text as it highlights an important aspect of the impact of data partitioning on performance.
- When introducing the watermelon dataset, authors should clarify that it was collected internally for disclosure. This is mentioned towards the end of the paper and may be missed. Authors should also include documentation in the data repository (zenodo) to describe their dataset (including motivation).
- Describe the dataset reorganisation and decoding task for DEAP and KUL as was done for CVPR (lines 154-164). (Found in Appendix, need to move to main text to enhance understanding.)
- Proofread for typos and grammar (not an exhaustive list)
o “researches segment the EEG data”
o Table 1: “TCL (chance level)”
o “retravel task”
o Differentials in equations (2) and (3)
o “the class-related feature has none possibility”, “there is none class-related feature”
- Define InfoNCE (Table 3)
- Figure 3: While mentioned in caption, better if images are annotated as target and predicted images (can be at top)
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Authors acknowledge limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reviewer1 7nx7
We sincerely appreciate your thorough review and constructive comments.
We are grateful for your high praise of the originality, quality, clarity, and significance of our work.
**W1: Adding the result of other BCI datasets and actual datasets**
As shown in Table R1 in the attached PDF,
we have added the performance of the current framework on the actual datasets
and two other datasets for motor imagery task and epilepsy detection task.
**W2: Number of the subjects in the SparrKULee dataset**
We acknowledge that we did not provide a suitable explanation.
Reorganizing the EEG data from SparrKULee subjects
into the other datasets requires each subject to have sufficiently long EEG recordings.
For example, for the KUL dataset, each subject completed 8 trials, with each trial lasting at least 6 minutes.
This requires that subjects in the SparrKULee dataset have at least 8 runs of recordings,
with each run lasting more than 6 minutes.
When reorganizing to the DEAP and CVPR datasets,
similar constraints also need to be considered.
Under these constraints, 29 out of the 85 subjects in the SparrKULee dataset meet the requirements.
As shown in Table R2, we have conducted the same experiment for all the other 19 subjects (SK19) and obtained similar (and in some cases higher) results.
In the updated version, we will use the results of all the 29 participants.
__Table R2: Experiment for SK19 and SK__
| |SK19-CVPR|SK19-DEAP|SK19-KUL|SK-CVPR|SK-DEAP|SK-KUL|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|DLC|93.68±1.51|89.69±1.48|100±0.00|69.83±2.98|72.70±1.36|100.00±0.00|
|DLC (chance level)|2.50|2.50|12.50|2.50|2.50|12.50|
|TLC-DF|-|85.13±1.91|100.00±0.00|-|76.19±1.80| 100.00±0.00|
|TLC-EEG|93.68±1.51|84.65±2.67|94.92±1.91|69.83±2.98|74.44±2.76|93.34±2.01|
|TLC-EEG-woDO|-|25.13±4.39|53.66±7.60|-|25.34±1.85|59.32±4.07|
|TLC (chance level)|2.50|25.00|50.00|2.50|25.00|50.00|
**Q1: “chance level”**
The sample distribution per class is balanced.
The chance level is determined by 1/class_num.
**Q2: Elaborate on the specifics/differences in the set up for BCI task decoding whether relying on EEG temporal autocorrelation**
As we concluded in the __Author rebuttal__, when a segment of continuous EEG data with the same class label is
divided into the training set and the test set, the model may rely on EEG temporal autocorrelation (TA) features for decoding.
For example, in the CVPR2017 dataset, 50 trials with the same class label are presented consecutively,
and these trials are randomly divided into the training set and test set.
Decoding tasks on this dataset rely on EEG TA features.
Conversely, the work NICE-EEG [1] used the Things-EEG [2] dataset.
The EEG data used for testing was completely separated in time from the EEG data used for training.
Segments of continuous EEG data with the same label appear only in the training set or only in the test set.
For VBH-GNN [3] and ManyDG [4], the "leave-subjects-out" data splitting strategy was used,
where training and testing data came from different subjects.
This also ensures that segments of continuous EEG data with the same label appear only in the training set or only in the test set.
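The contrast between these two splitting regimes can be sketched in a toy simulation (our own illustrative code, not the authors'): each "domain" stands for one continuous recording and is given its own baseline offset mimicking TA-induced drift, and a simple 1-NN classifier is scored under a sample-wise split (segments from one domain leak into both sets) versus a leave-domains-out split.

```python
import numpy as np

rng = np.random.default_rng(0)
n_domains, n_classes, per_domain, dim = 40, 4, 20, 8

# Each "domain" (one continuous recording) gets its own baseline offset,
# standing in for TA-induced drift; one class label is assigned per domain.
offsets = rng.normal(0.0, 5.0, size=(n_domains, dim))
dom_label = rng.integers(0, n_classes, size=n_domains)

X = np.vstack([offsets[d] + rng.normal(0.0, 1.0, (per_domain, dim))
               for d in range(n_domains)])
y = np.repeat(dom_label, per_domain)
dom = np.repeat(np.arange(n_domains), per_domain)

def acc_1nn(train, test):
    """Accuracy of a 1-nearest-neighbour classifier on the given index sets."""
    d2 = ((X[test][:, None, :] - X[train][None, :, :]) ** 2).sum(-1)
    return float((y[train][d2.argmin(1)] == y[test]).mean())

# Sample-wise split: samples from the same domain land in train AND test.
idx = rng.permutation(len(X))
acc_leaky = acc_1nn(idx[:len(X) // 2], idx[len(X) // 2:])

# Leave-domains-out split: whole domains are held out for testing.
held_out = np.isin(dom, np.arange(n_domains // 2))
acc_ldo = acc_1nn(np.where(~held_out)[0], np.where(held_out)[0])

print(f"sample-wise: {acc_leaky:.2f}, leave-domains-out: {acc_ldo:.2f}")
```

In this construction the class label carries no real signal (it is random per domain), so the leave-domains-out accuracy hovers near the 25% chance level, while the sample-wise split scores far higher purely through the leaked domain offsets.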
**Q3: More details in caption and visualization**
Thank you very much.
The updated Figure 2 can be found in the PDF file.
Lines 258-262 in the results will be updated in the new version to give a detailed description of the presented figure.
**S1: Better formalise the framework**
We have added a subplot Figure 1d and Table R1, and updated Figure 2 in the attached PDF to better formalize the framework.
Table R3 is added to introduce the specific content of the terms "domain" and "class" for each dataset.
We hope these changes will help non-BCI audiences better understand our generalized framework.
**Table R3: The specific content of the terms "domain" and "class" for each dataset.**
|**Dataset**|**Domain**|**Class**|
|:-:|:-:|:-:|
|**CVPR**|One of 40 blocks, each lasting 25 seconds|One of 40 image categories|
|**DEAP**|One of 40 trials, each lasting 60 seconds|One of 4 emotion categories|
|**KUL**|One of 8 trials, each lasting 6 minutes|Attention to the left or right direction|
|**BCIIV2a**|One of 576 trials, each lasting 4 seconds|One of 4 motor imagery conditions|
|**SIENA**|One of 4-20 EEG segments, each with varying duration|Epileptic or non-epileptic|
**S2: Suggestion about discussion**
The discussion is limited due to page constraints.
In the updated version, we will discuss "works that do and do not rely on EEG temporal autocorrelation" (Q2)
and add further discussion about the effect of EEG temporal autocorrelations (TA) on EEG decoding
and how to avoid it.
**S3-S6,S8: Suggestion about Table 5 (S3), information about watermelon dataset (S4), reorganization (S5), proofread for typos and grammar (S6), caption for figure 3 (S8)**
We are delighted to accept these suggestions in the updated version.
**S7: Define InfoNCE**
The InfoNCE loss in a batch is defined as follows:
$
Loss=-\frac{1}{N}\sum_{i=1}^N\log\frac{\exp(z_i\cdot v_i/\tau)}{\sum_{j=1}^{N}\exp(z_i\cdot v_j/\tau)}
$
where $N$ is the batch size,
$z_i$ is the latent representation of EEG $i$ extracted by the EEG encoder,
$v_i$ is the feature of the image corresponding to EEG $i$,
and $\tau$ is the temperature parameter.
We will add the definition in the updated version.
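For concreteness, here is a minimal NumPy sketch of this loss (our own illustration; unit-normalising the embeddings before taking dot products is a common convention we assume here, not something stated in the rebuttal):

```python
import numpy as np

def info_nce(z, v, tau=0.07):
    """InfoNCE loss for a batch of paired embeddings.

    z: (N, d) EEG embeddings; v: (N, d) image features; row i of z is
    paired with row i of v. tau is the temperature parameter.
    """
    # Cosine-style similarities (row normalisation is assumed, see lead-in).
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    logits = z @ v.T / tau                          # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Minus the average log-probability assigned to the matching pair.
    return float(-np.mean(np.diag(log_softmax)))
```

With perfectly matched embeddings (v equal to z) and a small temperature the loss approaches 0, while random, unrelated pairs give a loss on the order of log N.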
[1] Y. Song et al. Decoding Natural Images from EEG for Object Recognition. In ICLR, 2024.
[2] A. T. Gifford et al. A large and rich EEG dataset for modeling human visual object recognition. In NeuroImage, 2022.
[3] C. Liu et al. VBH-GNN: Variational Bayesian Heterogeneous Graph Neural Networks for Cross-subject Emotion Recognition. In ICLR, 2024.
[4] C. Yang et al. ManyDG: Many-domain Generalization for Healthcare Applications. In ICLR, 2023.
---
Rebuttal Comment 1.1:
Comment: Author rebuttal included a more defined framework for their analysis, additional visuals, clarity on the number of subjects in the SparrKULee dataset, and results from additional EEG datasets. Authors also updated the dataset repository with more information about the dataset within the anonymous constraint of the submission. Authors should refer to FAIR principles for data sharing (https://www.go-fair.org/fair-principles/), and also include information about packages to read the data files (.cnt). Reviewer concerns are mostly addressed. Revising rating.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you very much for your constructive suggestions and recent strong support. We will organize the datasets to comply with the FAIR principles and provide the packages to read the data files recently. More information about the datasets will be disclosed after the double-blind review period concludes. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for their valuable suggestions and for recognizing our intention to reveal the fatal drawback of using DNN on general BCI decoding tasks.
Although three reviewers rated the contribution of this work as "good",
it is a pity that the remaining reviewer drew conclusions in an inappropriate way.
Perhaps the usage of the term "watermelon EEG" is not precise enough.
However, please note that this term is not the key point of our work,
and phantom EEG collected from watermelons has been widely used in neuroscience studies (see details below).
The critical issues raised in the reviewers' questions are summarized here, and each of the four reviewers' comments has been answered one by one.
We hope that you will recognize the true contribution of this work,
and we are willing to answer any questions during the discussion period.
We have attached a PDF file containing new tables and figures referenced in our detailed responses below.
**The contribution of the present study**
This work is not about "CNN application with data splitting issues" (Reviewer hA7i).
Instead of focusing on CNN applications, our work concentrates on the impact of EEG temporal autocorrelations (TA) on various BCI decoding tasks.
The CNN was chosen as the EEG encoder because its structure is simple enough to show that EEG TA features are easy to learn.
Of course, TA features can also be learned by more complex models,
and we have added the experiments suggested by other reviewers (see details below in "additional experiments") for further confirmation.
In our framework, we proposed the concept of "domain" to represent EEG patterns resulting from TA, and then used phantom EEG to remove stimulus-driven neural responses for verification.
The results confirmed that the TA, always existing in the EEG data, added unique domain features to a continuous segment of EEG.
The specific finding is that when the segment of EEG data with the same class label are split into multiple samples,
the classifier will associate the sample's class label with the domain features,
interfering with the learning of class-related features.
This leads to an overestimation of decoding performance for test samples from the domains seen during training,
and results in poor accuracy for test samples from unseen domains (as in real-world applications).
Furthermore, our work suggests that the key to reducing the impact of EEG TA on BCI decoding is to decouple class-related features from domain features in actual EEG datasets,
rather than simply adopting the "leave-one-subject-out" strategy (mentioned by Reviewer hA7i), which is quite limited for practical BCI applications.
**The rationality for using watermelon**
The watermelon served as a phantom head, as we have stressed in the paper (line 79).
The term "watermelon EEG" can be changed to "phantom EEG" to avoid confusion.
The usage of phantom head allows researchers to evaluate the performance of neural-recording equipment and proposed algorithms
without the effects of neural activity variability, artifacts, and potential ethical issues.
Phantom heads used in previous studies include digital models [1], human skull [2], artificial physical phantoms [3], and watermelons [4]–[7].
Due to their similar conductivity to human tissue, similar size and shape to the human head, and ease of acquisition, watermelons are widely used as "phantom heads".
Previous studies have shown that EEG signals exhibit TA,
which arises from baseline drift and long-range temporal correlations of neural oscillations.
In many BCI datasets, the domain features caused by TA and the class-related features driven by the stimulus are coupled.
However, whether the impact of EEG TA is important,
and whether it should be accounted for in decoding, remains hotly debated,
and is a bone of contention for using DNNs in BCI decoding tasks (see citations 8-10 and 15-18 in the manuscript).
The advantage of adopting phantom EEG is to control the interference from stimuli-driven response and subject-related factors.
And it is firstly found in our work that the phantom EEG exhibit the effect of TA on decoding even when only noise was recorded,
indicating the inherent existence of TA in the EEG data.
As commented by the reviewer S3fr, the framework we proposed and verified with phantom EEG data has significant implications for BCI research.
**Additional experiments of other BCI tasks and actual datasets**
We have conducted the experiments suggested by reviewers (7nx7, AKrL and S3fr) with the BCIIV2a dataset [8] for motor imagery task and the SIENA dataset [9] for epilepsy detection task.
All results for the five actual datasets, shuffled datasets (actual EEG with shuffled labels), and their reorganized datasets are presented in Table R1.
Besides the CNN, two other classical models suggested by reviewer AKrL were also used for verification: EEGNet [10] and EEG Conformer [11].
The conclusion is consistent with that drawn in the manuscript. All these additional experimental results will be added in our updated version.
[1] Wolters et al., NeuroImage, 2006, doi: 10.1016/j.neuroimage.2005.10.014.
[2] Gavit et al., IEEE Trans. Biomed. Eng., 2001, doi: 10.1109/10.951510.
[3] Oliveira et al., J. Neural Eng., 2016, doi: 10.1088/1741-2560/13/3/036014.
[4] Mandelkow et al., NeuroImage, 2007, doi: 10.1016/j.neuroimage.2007.04.034.
[5] Perentos et al., IEEE Trans. Biomed. Eng., 2013, doi: 10.1109/TBME.2013.2241059.
[6] Schaefer et al., NeuroImage, 2011, doi: 10.1016/j.neuroimage.2010.05.084.
[7] Collins et al., IEEE Trans. Med. Imaging, 1998, doi: 10.1109/42.712135.
[8] Tangermann et al., Front. Neurosci., 2012, doi: 10.3389/fnins.2012.00055.
[9] Detti et al., Processes, 2020, doi: 10.3390/pr8070846.
[10] Lawhern et al., J. Neural Eng., 2018, doi: 10.1088/1741-2552/aace8c.
[11] Song et al., IEEE Trans. Neural Syst. Rehabil. Eng., 2023, doi: 10.1109/TNSRE.2022.3230250.
Pdf: /pdf/aa6a560c9822d1c398c3cbcdc12a41f5fcbc4569.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving the Learning Capability of Small-size Image Restoration Network by Deep Fourier Shifting | Accept (poster) | Summary: The authors introduce a theoretically sound Fourier shifting operator designed to enhance the learning capability of small-size image restoration models. This work represents the first comprehensive attempt to model the shift mechanism within the Fourier domain. The proposed operator is versatile and can be seamlessly integrated into existing image restoration networks, offering a flexible plug-and-play solution.
Strengths: 1, The proposed deep Fourier shifting operator is theoretically plausible and enhances learning by transforming spatial shifting into an information-preserving Fourier cycling manner. This novel approach models the shift mechanism in the Fourier domain comprehensively.
2, Integrating Fourier shifting into existing image restoration methods improves performance and reduces model parameters, demonstrating practical effectiveness and efficiency.
3, The paper is well-organized and easy to follow. The clear presentation and structure help us understand the progression from problem formulation to the proposed solution.
Weaknesses: 1, The authors design two Fourier shifting variants. While both improve performance, neither consistently outperforms the other across different baselines. It would be helpful if the authors could suggest which variant to choose for different baselines or application scenarios, and explain the differences between the variants.
2, In Fig.7, the authors present the generalizability of the shifting displacement effect from LOL to Huawei. I wonder why the proposed Fourier shifting has better generalization ability, and the authors are encouraged to provide some underlying working mechanisms.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness part.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1,generalization.**
Fourier shifting exhibits enhanced generalization due to its operation in the frequency domain, which captures global image information and maintains feature integrity while minimizing domain-specific artifacts. This approach is particularly advantageous for managing low-light degradation, a global phenomenon effectively addressed in the Fourier domain. By incorporating this domain-specific knowledge, our method benefits from a natural prior that significantly improves adaptability across diverse datasets. Moreover, Fourier shifting is more parameter-efficient and reduces artifacts compared to spatial shifting, which often suffers from issues like information loss and frequency aliasing. This leads to more robust feature extraction and consistent performance across varied scenarios. Our revised paper will thoroughly discuss these advantages, including the Fourier domain's role in addressing global degradations and how it contributes to the superior generalization of our method, ensuring better results across different real-world applications.
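The point that the Fourier domain captures global image information can be illustrated with a small numpy check (a generic demonstration, not the paper's experiment): changing a single pixel perturbs every Fourier coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))

perturbed = img.copy()
perturbed[3, 5] += 1.0  # change exactly one pixel

delta = np.fft.fft2(perturbed) - np.fft.fft2(img)
# A single-pixel change spreads to all 256 frequency coefficients:
# the difference spectrum is a pure complex exponential with unit magnitude.
assert np.all(np.abs(delta) > 0.99)
```

This global spread is why frequency-domain operations naturally address global degradations such as low light, in contrast to the local receptive fields of spatial operators.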
**2,variants.**
Like spatial convolution, our proposed Deep Fourier Shifting is a drop-in alternative to the convolution operator; it is a general strategy, and neither variant is tailored to a specific vision task. The two Fourier shifting variants each have specific strengths and potential drawbacks. The amplitude-phase variant processes amplitude and phase components separately, which is beneficial for tasks requiring precise phase preservation. However, this variant involves trigonometric functions, which can introduce numerical instability because small deviations in angle are amplified, affecting precision-sensitive applications. The real-imaginary variant, on the other hand, processes the real and imaginary parts separately, offering greater numerical stability and suiting tasks involving complex textures or fine-grained restoration. In our revised paper, we will provide detailed guidance on choosing between these variants based on task requirements, baseline characteristics, and potential numerical stability issues.
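A minimal sketch of how the two parameterizations might look; the cyclic roll across the channel axis and the decomposition details are illustrative assumptions, not the paper's exact operator:

```python
import numpy as np

def fourier_shift(feat, k=1, variant="real-imag"):
    """Cyclically shift the Fourier components of a (C, H, W) feature map
    across channels. np.roll permutes values without zero-filling, so no
    information is discarded (unlike a zero-padded spatial shift)."""
    F = np.fft.fft2(feat)  # per-channel 2-D FFT over the last two axes
    if variant == "amp-phase":
        amp, phase = np.abs(F), np.angle(F)
        F = np.roll(amp, k, axis=0) * np.exp(1j * np.roll(phase, k, axis=0))
    else:  # real-imaginary
        F = np.roll(F.real, k, axis=0) + 1j * np.roll(F.imag, k, axis=0)
    return np.fft.ifft2(F).real

feat = np.random.default_rng(1).standard_normal((4, 8, 8))
a = fourier_shift(feat, variant="amp-phase")
b = fourier_shift(feat, variant="real-imag")
```

Because `np.roll` only permutes values, neither variant discards information, matching the information-lossless motivation; when both components are rolled by the same amount, the two variants coincide.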
---
Rebuttal Comment 1.1:
Title: After rebuttal
Comment: Thanks for the rebuttal. This rebuttal has addressed my concerns. The proposed "Deep Fourier Shifting Operator" is effective, which is proven both mathematically and practically. Taking its contributions to the future community for various vision problems into consideration, I will keep my positive rating. | Summary: This paper addresses challenges in current image restoration methods, which are often too computationally demanding for edge devices. It proposes Deep Fourier Shifting, a novel approach inspired by spatial-shift operators adapted for low-level image tasks. By leveraging the Fourier domain and ensuring information preservation through cycling, Deep Fourier Shifting introduces two variants—amplitude-phase and real-imaginary—designed to replace conventional convolution units in existing networks with fewer parameters. Extensive experiments across denoising, low-light enhancement, guided super-resolution, and de-blurring tasks demonstrate consistent performance gains and reduced computational overhead, validating the robustness and efficiency of the proposed approach.
Strengths: 1.This paper is well-structured and effectively organized. It provides a clear introduction to the challenges faced by current image restoration methods on edge devices due to computational limitations.
2.The introduction of deep fourier shifting as a technique for small-size image restoration networks is both feasible and holds practical significance. By leveraging spatial-shift principles in the Fourier domain, the method addresses the computational constraints of edge devices while maintaining or improving restoration performance. This approach not only enhances the efficiency of image restoration tasks but also facilitates practical deployment in real-world applications where computational resources are limited.
3.This paper demonstrates promising results for the deep fourier shifting method in various small-size image restoration tasks, including denoising, low-light enhancement, guided super-resolution, and de-blurring. Experimental evaluations consistently show performance improvements with reduced computational overhead compared to traditional convolutional approaches.
Weaknesses: 1.While the proposed method shows promise for deployment on edge devices due to reduced computational requirements, potential practical implementation challenges should be addressed. These might include considerations such as memory usage, real-time processing capabilities, and adaptability to different hardware platforms. Discussing these aspects would enhance the manuscript's practical relevance and feasibility in real-world applications.
2.While the Deep Fourier Shifting method shows promising results across multiple low-level image restoration tasks, including denoising, low-light enhancement, and super-resolution, it would be valuable to explore its performance across a broader range of image types and degradation levels. Assessing how well the method handles complex real-world scenarios with varying textures, structures, and noise characteristics would strengthen the manuscript's applicability.
3.I noticed that the references primarily include works up until 2022. To ensure the manuscript reflects the latest advancements in the field, I recommend incorporating relevant studies published in 2023 and onwards.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the above weaknesses part.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author has discussed the limitations of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1, types.**
We appreciate the suggestion and will expand our experiments to include various types of images with different textures, structures, and noise characteristics. This will provide a more comprehensive assessment of how Deep Fourier Shifting performs under a wider array of conditions, enhancing the manuscript’s applicability and robustness.
**2, Complexity.**
We appreciate the feedback on the practical implementation of our proposed method on edge devices. To address potential challenges, we will discuss key aspects such as memory usage, real-time processing capabilities, and adaptability to different hardware platforms. This will include information on memory requirements, real-time processing metrics like latency and throughput, and evaluations on various hardware configurations, including CPUs, GPUs, and specialized edge processors. By addressing these aspects, we aim to enhance the manuscript's practical relevance and demonstrate the feasibility of deploying our method in real-world edge device applications. Due to time constraints, we tested our method on the DnCNN model. In Table 5, we compared our Fourier shifting operator with the spatial shift operator and the baseline 3x3 convolution. Our Fourier shifting approach not only halved the number of parameters but also improved network performance, whereas the spatial shift operator caused a significant 14 dB performance drop. Additionally, we assessed runtime and FLOPs to evaluate suitability for edge devices. Our results show that the DnCNN-18 model with Fourier shifting achieved an average runtime of 7 milliseconds and 1.5 GFLOPs per 256x256 image on an NVIDIA GTX 1080 GPU, compared to 15-20 milliseconds and 3.9 GFLOPs for the baseline. This substantial reduction in both runtime and FLOPs highlights the efficiency of our method for resource-constrained hardware.
**3, onwards.**
Thank you for your observation. We will update the manuscript to include recent studies and advancements published in 2023 and onwards. Due to time constraints, we have conducted additional experiments using the recent work LFormer [1] presented at ACM-MM 2024. Specifically, we replaced the convolution operator in feature extraction with Fourier shifting to evaluate its impact on guided image super-resolution on the WorldView-II and Gaofen-2 satellites.
| Method | WV2 SAM | WV2 ERGAS | WV2 Q8 | WV2 PSNR | GF2 SAM | GF2 ERGAS | GF2 Q4 | GF2 PSNR |
|--------|---------|-----------|--------|----------|---------|-----------|--------|----------|
| LFormer | 2.8985 | 2.1645 | 0.9193 | 39.0748 | 0.6481 | 0.5778 | 0.9851 | 44.1958 |
| LFormer+ | 3.1576 | 1.7046 | 0.8630 | 39.2988 | 0.5325 | 0.4878 | 0.9904 | 45.4331 |
Incorporating our proposed Fourier shifting design into the LFormer framework has demonstrated significant improvements in performance. The results indicate that our method not only enhances the accuracy of guided image super-resolution but also improves the efficiency of the feature extraction process. This advancement is evident from the substantial gains in quantitative metrics, underscoring the effectiveness of our approach in optimizing and refining the image restoration capabilities of the model. Moving forward, we will incorporate additional methodologies to further validate and enhance the efficiency of our approach.
[1] Linearly-evolved Transformer for Pan-sharpening, ACM-MM 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response. I maintain my positive score. | Summary: The authors in this paper aim at exploring more into image restoration task via the concept of spatial shift operation that facilitates efficient spatial communication and has achieved significant advancements in several high-level vision tasks. As per their observation, since the image restoration is more spatial shift sensitive, they propose an information lossless shifting operator, i.e. Deep Fourier Shifting. Additionally in the case of the popular spatial shift operators, the regions subjected to shifting are filled with zeros which may lead to loss of information and a decline in the overall restoration performance.
The main contribution of the authors lies in proposing a novel deep shifting operator for low-level image restoration tasks, named Deep Fourier Shifting. This operator basically revisits and enhances the fundamental principles of the traditional shift operator by extending its application into the Fourier domain.
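The zero-filling concern can be made concrete with a tiny numpy comparison of a conventional zero-padded shift against a cyclic (information-preserving) one; the 4x4 array is purely illustrative:

```python
import numpy as np

x = np.arange(16.0).reshape(4, 4)

# Conventional spatial shift: vacated rows are filled with zeros,
# so the row shifted out of the frame is discarded.
zero_shift = np.zeros_like(x)
zero_shift[1:, :] = x[:-1, :]

# Cyclic shift: shifted-out values wrap around, nothing is lost.
cyclic_shift = np.roll(x, 1, axis=0)

assert zero_shift.sum() < x.sum()               # information discarded
assert np.isclose(cyclic_shift.sum(), x.sum())  # information preserved
```

The Fourier cycling described in the summary follows the second pattern: it is a permutation of values rather than a truncation.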
Strengths: 1. The authors delve into exploring the concept of shift operator for low level vision tasks which seems to be interesting.
2. Extensive experiments across multiple low-level tasks, including image denoising, low-light image enhancement, guided image super-resolution, and image de-blurring, demonstrate consistent performance gains from Deep Fourier Shifting while reducing the computation burden.
Weaknesses: 1. The contribution seems somewhat limited; extending spatial shifting into the Fourier domain and then proving its efficacy does not seem to be a strong contribution.
2. The content of the paper does not go well with the title, "Improving the small-size restoration network by ...". If the reviewer understands correctly, there does not seem to be much focus on this concept or on this experiment. For example, taking a representative network for any one task, say low-light image enhancement (e.g., SID), there should be a proper explanation of how the proposed concept works on it in terms of parameters, FLOPs, and inference time.
3. There are no real-world datasets actually being considered, as is claimed in the introduction, since the results for deblurring and low-light enhancement seem to be only on synthetic datasets.
4. Regarding the implementation details, this part is very confusing to follow, as what exactly the baseline or original model is has not been explicitly mentioned.
5. Typos: The word field for receptive field has been incorrectly written as filed in many places.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How is the mutual information shown in Figure 6 (left) calculated?
2. The feature maps in Fig. 6 (right) do not clearly show the removal of grid effects; is it possible to show some extra images?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations could also be framed as the authors not showing much on real-world tasks or any experiments on edge devices.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1, Fig.6.**
Thank you for your feedback. Firstly, the Fourier transform is an efficient tool that amplifies image degradations in the Fourier domain, and our learnable parameters act as filters to eliminate these artifacts. Secondly, the artifacts arise from two aspects: (1) features are extracted from low-light degraded inputs, which naturally reflect these degradations, and (2) the testing baseline network inherently uses down-sampling operators to achieve multi-scale features, where the down-sampling inherently causes frequency truncation, leading to frequency aliasing and ringing effects. Additionally, to verify the robustness of our experiments, we analyzed both shallow and deep features in the network, finding that similar artifacts persist and even accumulate, whereas our Fourier shifting can eliminate these, resulting in cleaner features. Our frequency domain approach inherently handles these degradations. The input discrepancies are due to the network being trained from scratch after incorporating our operator, resulting in natural feature changes. Our network effectively reduces degradations both before and after the application of shift-sa, while the degradation persists with shift-sa. Finally, we used mutual information to measure information loss caused by shifting, which validates the effectiveness of our method.
**2, MI.**
In Figure 6 (left), mutual information is calculated by first extracting feature maps before and after applying the shift operator and averaging over the channel dimension of these feature maps. The averaged maps are then used as the variables for the mutual information computation: joint and marginal histograms of the pixel intensities are converted into probability distributions. This measure assesses the amount of shared information between the feature maps, indicating how well the shift operator preserves feature information. In terms of the index representing network depth, we tested multiple stages of the network, from shallow to deep layers, to demonstrate the completeness of information retention. Relevant experiments have also confirmed that our proposed shifting method effectively preserves information throughout the network. Our operator exhibits significantly higher mutual information than Shift-sa, showcasing its efficacy in information preservation.
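The histogram-based estimate described above can be sketched as follows; the bin count, map size, and comparison pair are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of mutual information (in nats) between two
    channel-averaged feature maps."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())

rng = np.random.default_rng(0)
before = rng.standard_normal((64, 64))
after_preserved = before.copy()          # fully information-preserving case
after_lossy = before.copy()
after_lossy[32:, :] = 0.0                # half the map zeroed, as in zero-fill

mi_preserved = mutual_information(before, after_preserved)
mi_lossy = mutual_information(before, after_lossy)
assert mi_preserved > mi_lossy > 0.0
```

A lossless operator keeps the post-shift map maximally informative about the pre-shift map, so higher MI indicates better information preservation, as claimed.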
**3, real-world.**
Thank you for your feedback. Due to time constraints, we initially conducted experiments on real-world remote sensing satellite scenes, which involve more complex degradations compared to the simulated scenarios with simple down-sampling and blurring. The results were as follows:
- **PANNET**:
- D_{\lambda} = 0.0737
- D_s = 0.1224
- QNR = 0.8143
However, with our newly designed network, the updated metrics are:
- **New Network**:
- D_{\lambda} = 0.0716
- D_s = 0.1215
- QNR = 0.8156
Here, lower values of D_s and D_{\lambda}, along with a higher QNR, indicate better performance. These improvements demonstrate that our approach continues to perform effectively in real-world scenarios. We will update the paper to reflect these results and emphasize the effectiveness of our method in practical applications.
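For context, QNR is typically computed from the two distortion indices; a quick check assuming the standard definition with unit exponents, which is consistent with the new network's quoted values:

```python
def qnr(d_lambda, d_s, alpha=1.0, beta=1.0):
    """Quality with No Reference: higher is better, 1.0 is ideal."""
    return (1.0 - d_lambda) ** alpha * (1.0 - d_s) ** beta

# The new network's values quoted above are self-consistent under this definition.
value = qnr(0.0716, 0.1215)
assert abs(value - 0.8156) < 1e-3
```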
**4, comlexity.**
We understand that the paper’s content might not fully align with the title. To address this, we will revise the paper to better demonstrate how our method enhances small-size restoration networks. This revision will include a thorough discussion of its impact on parameters, FLOPs, and inference time, particularly for tasks such as low-light image enhancement using networks like SID. We will ensure that the paper clearly explains the application and evaluation of our proposed concept in this context. We also acknowledge that the implementation details might be unclear. To clarify, we will explicitly define the baseline or original model used for comparison, including a detailed description of the model architecture, configurations, and any modifications. Due to time constraints, we tested our method on the DnCNN model. In Table 5, we compared our Fourier shifting operator with the spatial shift operator and the baseline 3x3 convolution. Our Fourier shifting approach not only halved the number of parameters but also improved network performance, whereas the spatial shift operator caused a significant 14 dB performance drop. Additionally, we assessed runtime and FLOPs to evaluate suitability for edge devices. Our results show that the DnCNN-18 model with Fourier shifting achieved an average runtime of 7 milliseconds and 1.5 GFLOPs per 256x256 image on an NVIDIA GTX 1080 GPU, compared to 15-20 milliseconds and 3.9 GFLOPs for the baseline. This substantial reduction in both runtime and FLOPs highlights the efficiency of our method for resource-constrained hardware.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I appreciate the efforts made by the authors to address the concerns. However I feel that the authors wrote the rebuttal in hurry without citing the mentioned work [PANNET] and comlexity for complexity in 4.
But I am still worried about the contributions aligning with the title, as discussing these facts would increase the relevance of the proposed work. I still feel that the manuscript needs to be polished a lot; for example, the algorithms mentioned in section 2.1 could be moved into the supplementary material and more details could be incorporated about important aspects like complexity. Additionally, I still feel that the paper is an extension of "When Shift Operation Meets Vision Transformer: An Extremely Simple Alternative to Attention Mechanism" in the Fourier domain; thus I would like to keep my score. | Summary: This paper explores the use of a spatial-shift operator for image restoration tasks, addressing the issue of information loss identified through experimental analysis. The authors propose a shift operator in the Fourier domain, leveraging the periodicity and cycling properties of the Fourier transform to develop a theoretically information-lossless operator that enhances performance. Two variants of the operator, magnitude-phase and real-imaginary, are considered. Experimental results across various image restoration tasks demonstrate superior performance compared to baseline methods and the spatial-shift operator. Interestingly, one could replace multiple spatial shifts with the proposed Fourier-shifts to reduce the number of parameters and thus potentially reduce memory requirements.
Strengths: 1) The paper presents a conceptually straightforward and theoretically sound idea.
2) The proposed operator enhances the performance of image restoration tasks. Simultaneously, depending on its design, it could potentially reduce computational complexity.
Weaknesses: The lower portion of Table 5 indicates that the number of parameters remains constant when multiple Fourier-shift operators are substituted with each spatial-shift operator. However, crucial metrics such as runtime and FLOPs are not included. Solely relying on the number of parameters does not provide a comprehensive understanding of the ablation’s suitability for the ultimate objective of deployment on edge or mobile hardware.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Figure 6-right, what is the cause of the artifacts present before the application of Shift-sa? Shouldn’t the two feature maps be identical prior to the application of either of the two operators?
Minor issues:
- In Table 4, the ERGAS metric value for the Shift-sa method on the WorldView-II dataset appears to be erroneously formatted in bold.
- Table 5 does not display any gray backgrounds in my PDF viewer. The statement on Line 233, "To further elucidate this point, values falling below the baseline are marked with a gray background, highlighting their relevance and impact," may be misplaced.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No comments
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1, Complexity.**
Thank you for your feedback. In Table 5, we compared the performance of our proposed Fourier shifting operator against the spatial shift operator and the baseline 3x3 convolution. Replacing convolutions with Fourier shifting not only reduced the number of parameters by half but also improved network performance, whereas the spatial shift operator led to a significant 14 dB performance drop. Additionally, we evaluated runtime and FLOPs to assess suitability for edge devices. Our experiments show that the DnCNN-18 model with our Fourier shifting configuration achieves an average runtime of 7 milliseconds and 1.5 GFLOPs per 256x256 image, compared to 15-20 milliseconds and 3.9 GFLOPs for the baseline on an NVIDIA GTX 1080 GPU. This indicates a substantial reduction in both runtime and FLOPs, highlighting the efficiency of our method for deployment on resource-constrained hardware. We will update Table 5 to include these metrics for a comprehensive evaluation. The lower portion of Table 5 uses a controlled variable approach to ensure consistent parameters while evaluating the network's robustness to shifting size. It is important to note that the spatial-shifting configurations were not aligned with our method's parameters throughout. Our results demonstrate that our network exhibits high robustness to varying shifting sizes.
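Runtime figures such as the 7 ms average above are sensitive to warm-up and averaging; a minimal, generic timing harness of the kind one might use (the numpy FFT call is a cheap stand-in for the model, not DnCNN itself):

```python
import time
import numpy as np

def average_runtime_ms(fn, x, warmup=3, runs=20):
    """Average wall-clock runtime of fn(x) in milliseconds, after warm-up."""
    for _ in range(warmup):   # discard cold-start effects (caches, lazy init)
        fn(x)
    start = time.perf_counter()
    for _ in range(runs):
        fn(x)
    return (time.perf_counter() - start) / runs * 1e3

x = np.zeros((256, 256), dtype=np.float32)  # same spatial size as reported
ms = average_runtime_ms(np.fft.fft2, x)
assert ms > 0.0
```

On a GPU, execution is asynchronous, so a synchronization call must precede each clock read for the measurement to be meaningful.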
**2, artifacts.**
Thank you for your feedback. Firstly, the Fourier transform is an efficient tool that amplifies image degradations in the Fourier domain, and our learnable parameters act as filters to eliminate these artifacts. Secondly, the artifacts arise from two aspects: (1) features are extracted from low-light degraded inputs, which naturally reflect these degradations, and (2) the testing baseline network inherently uses down-sampling operators to achieve multi-scale features, where the down-sampling inherently causes frequency truncation, leading to frequency aliasing and ringing effects. Additionally, to verify the robustness of our experiments, we analyzed both shallow and deep features in the network, finding that similar artifacts persist and even accumulate, whereas our Fourier shifting can eliminate these, resulting in cleaner features. Our frequency domain approach inherently handles these degradations. The input discrepancies are due to the network being trained from scratch after incorporating our operator, resulting in natural feature changes. Our network effectively reduces degradations both before and after the application of shift-sa, while the degradation persists with shift-sa. Finally, we used mutual information to measure information loss caused by shifting, which validates the effectiveness of our method.
**3,minor issue.**
Thank you for your detailed review and valuable feedback. We will carefully review the entire manuscript to correct any typos and grammar errors to improve the overall clarity and quality of our work.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ responses to my questions. After reviewing all the feedback, I am particularly concerned about the responses to the issues raised by Reviewer 3jgW. While it is positive that the authors have acknowledged and addressed some of these issues, the necessity of making significant changes to the title suggests that the impact of the work may be less substantial than initially claimed. Consequently, I concur with the reviewer that the contribution may be perceived as more limited. Therefore, I am lowering the score to a weak accept. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Can Transformers Smell Like Humans? | Accept (spotlight) | Summary: This paper mainly focuses on the question "Can transformers smell like humans?" Using a chemical-structure transformer called MoLFormer to encode odorant molecules, this paper shows that the pre-trained MoLFormer representation can classify the odors of a variety of molecules without careful fine-tuning. The method builds a relationship between odorant chemical structures and their odors, with the odors labeled by humans.
Strengths: 1. This paper tries to use a pre-trained transformer to build the relationship between chemical structures and odors and, to some extent, shows that the transformer has the ability to do odorant classification.
2. This paper applies multiple databases to support the proposed argument, and the databases cover a variety of methods for labeling odors. Also, experiments are included for every labeling method applied.
Weaknesses: 1. The most important part of the odor classification system is the network; unfortunately, the paper only directly applies a well-designed transformer (MoLFormer) without any modification.
2. It looks like this paper does not fine-tune MoLFormer on the applied databases. The paper argues that the model can achieve good performance without fine-tuning; however, it would be better to include experiments to support this argument.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How many models (besides MoLFormer) did the research group consider for generating odorant representations?
2. Did the authors consider fine-tuning MoLFormer for better performance?
3. Besides transformers, did the authors consider other kinds of models such as CNNs and graph networks?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: 1. It would be better to do more analyses of the fine-tuning of the network.
2. It would be better to include some innovation in the existing models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and for the time spent reviewing our paper. We now address the questions (Q), weaknesses (W), and limitations (L) raised by the reviewer:
* **(W2, Q2, L1) Fine-tuning results**: We thank the reviewer for the suggestion. We present the results of fine-tuning the complete MoLFormer model in **Figure III** of the general rebuttal document and will add these results to the paper. The change compared to MoLFormer is not significant in many cases. This raises interesting questions regarding the feasibility of fine-tuning pre-trained large-scale chemical models for human olfaction (and the method to do so).
* **(W1, L2) Model Architecture**: We agree that the development of novel network architectures for this task is interesting but, as it stands, we believe our work responds to a different, but also very relevant question. We explore whether representations extracted from pre-trained large-scale models on chemical data are aligned with human perception. Our finding suggests that pre-trained models are highly aligned with human perception of odorants **without being explicitly trained** on it, even when compared to supervised learning methods **explicitly trained** on human olfactory perceptual data.
* **(Q1, Q3) Models employed in our work**: In our work, we consider three types of models to generate odorant representations: (i) the MoLFormer, a self-supervised, *transformer-based*, large-scale pre-trained model of chemical data, which is not trained on human assessments of olfactory stimuli; (ii) the Open-POM, a supervised *graph neural network* model that is trained explicitly using human assessments of olfactory stimuli; (iii) the DAM model, a *feature-engineered* representation model that was specifically proposed to be used in prediction tasks of the human olfactory experience. To the best of our knowledge, Open-POM is the SOTA deep-learning model trained for prediction tasks of the human olfactory experience, and the DAM model is the SOTA feature-engineered model for prediction tasks of the human olfactory experience.
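Probing frozen representations without fine-tuning, as done throughout this work, is commonly implemented by fitting only a lightweight linear head on top of fixed embeddings. A generic ridge-regression probe sketch on synthetic stand-in embeddings (random vectors, not actual MoLFormer outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins: 200 "molecules", each with a frozen 32-dim embedding and a
# binary odor label that happens to be linearly decodable.
X = rng.standard_normal((200, 32))
y = (X @ rng.standard_normal(32) > 0).astype(float)

# Fit only the linear head (ridge regression on +/-1 targets);
# the embeddings themselves are never updated.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(32), X.T @ (2.0 * y - 1.0))
acc = ((X @ w > 0) == (y > 0.5)).mean()
assert acc > 0.8
```

High probe accuracy is evidence that the relevant perceptual structure is already linearly accessible in the pre-trained representation, without any task-specific fine-tuning.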
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my comments.
I believe that the authors' responses can somehow answer my questions, even though I still cannot be convinced completely. I would like to re-rate the paper to Borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for the discussion and for recognizing our efforts with the score raise. | Summary: This paper took a transformer model that was pre-trained on general chemical structures and tested whether the resulting model representations aligned with human olfactory perception. Specifically, the authors used a transformer for chemical structures called MoL-Former. MoL-Former was trained via masked token prediction loss. The representations from MoL-Former can predict odor labels from experts and odor ratings from non-experts (although the correlations between the model and rating scores is quite low). The representations are also highly correlated with physiochemical descriptors that are related to human olfactory perception.
The authors also compare the MoL-Former self-supervised transformer model to other baseline models (Open-POM, which is supervised with odor labels; and DAM, which predicts similarities between pairs of odors). MoL-Former generally performs comparably to these models, even though the baseline models are supervised and MoL-Former is self-supervised.
Strengths: Transformer models have been revolutionary in modeling sensory modalities of vision and audition, but olfaction is comparatively understudied. If olfaction can also be explained with transformer models, it indicates that the domain-general learning mechanisms of transformers can explain sensory processing in the brain more broadly. Put more simply, the results support a universal, domain-general learning mechanism for sensory processing across modalities. This, in and of itself, is an interesting finding worthy of publication.
Another strength of the paper is that it relies on self-supervision to pre-train the transformer. This is important both practically and theoretically. From a practical standpoint, acquiring labelled datasets is challenging, whereas acquiring large-scale unlabelled datasets is becoming increasingly feasible. From a theoretical standpoint, this provides a better comparison to human olfaction than supervised models. Most human olfactory learning (and sensory learning more generally) is unsupervised. We are rarely given labels for odors, and animals are able to learn about odors without any verbal labels.
Finally, the inclusion of baseline models for comparison to the unsupervised transformer is important for interpreting their results. By comparing to OpenPOM and DAM, the authors demonstrate that a self-supervised transformer can perform comparably to previous supervised models.
Weaknesses: Major
- Section 4.1 Expert labels classification was difficult to follow. How was dimensionality reduced to 20, through PCA perhaps? How was the same procedure applied to DAM, given that DAM already has less than 20 features? Why wasn't the visualization in Figure 3 also performed on DAM?
- Sections 4.2 & 4.3 need noise ceilings in order to be interpretable. The authors analyzed each model's predicted average rating with the actual average rating from participants. However, it is unreasonable to expect a model to be more correlated with average ratings than a typical individual human participant. The noise ceiling computes the average correlation between the human participants and mean performance to estimate a reasonable "ceiling" performance that could be expected from models. When the data are very noisy, the noise ceiling will be low (and the human benchmark will generally be less useful because of its low precision), and when the data are not noisy, the noise ceiling will approach 1.
Minor
- The first sentence of the paper references Damasio, 1989 and Meyer & Damasio, 2009. These are both fantastic papers, but they don't really fit the sentence where they are referenced. References like DiCarlo & Cox, 2007 and Olshausen & Fields, 2004 strike me as more relevant.
- A representational similarity matrix (RSM) would be a great visualization for Section 4.3. It would be really interesting to visualize how the patterns of similarity between different odorants compared across models and humans. You don't need to add it, but it would be helpful for readers to visualize how the models and humans are representing the different odors.
- The description of SMILES is a bit hard to follow. Could you provide an example and explain how (or if) the inputs to Open-POM, DAM, and MoL-Former differ?
Technical Quality: 3
Clarity: 3
Questions for Authors: Does the input to MoL-Former (and the baseline comparison models) correspond to olfactory receptors? If the input data from olfactory receptors does not match the input data to MoL-Former, then it's not actually completing the same task, and it becomes much harder to argue that the model can "smell like humans."
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and for the time spent reviewing our paper. We now address the questions (Q) and weaknesses (W) raised by the reviewer:
* **(W1.1) Dimensionality Reduction**: In this work, we use PCA to reduce the dimensionality of the representations for MoLFormer and Open-POM. You are correct that DAM does not require PCA. We have clarified this in the updated version of the document.
* **(W1.2) DAM on Figure 3**: We thank the reviewer for the comment. We have added the t-SNE visualization of the DAM model **(Figure II. c)** in the general rebuttal document. We also added such visualizations using UMAP and PCA in the updated version of the paper. (We didn’t include them in the general rebuttal document due to the lack of space.)
* **(W2) Noise ceilings**: We thank the reviewer for the suggestion to use noise ceilings in Section 4.2 and Section 4.3. We have computed the noise ceilings for the "Sagar" and "Keller" datasets (these are the only publicly available datasets with multiple evaluators per odorant), as shown in the following tables. We obtain noise ceilings of **$0.28\pm{0.1}$** for the Keller dataset and **$0.7\pm{0.05}$** for the Sagar dataset. This indicates that the Sagar dataset is less noisy and that there is still room for the models to increase the alignment, whereas the Keller dataset alignment results are already relatively close to the noise ceiling. We will also add the results divided by the noise ceiling in the appendix of the paper.
**Sagar dataset**:
| Bakery | Burnt | Cool | Decayed | Fishy|Floral | Fruity | Intensity | Musky | Pleasantness | Sour| Spicy | Sweaty | Sweet | Warm |
|:-------------:|:-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|-------------:|
| 0.68 | 0.70 | 0.68| 0.72| 0.75 |0.73 | 0.79| 0.75| 0.71 | 0.74| 0.66| 0.66 | 0.62|0.72| 0.61|
**Keller dataset**:
|Acid|Ammonia|Bakery|Burnt|Chemical|Cold|Decayed|Familiarity|Fish|Flower|Fruit|Garlic|Grass|Intensity|Musky|Pleasantness|Sour|Spices|Sweaty|Sweet|Warm|
|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|
|0.21|0.21|0.32|0.27|0.27|0.17|0.29|0.33|0.21|0.26|0.37|0.31|0.25|0.53|0.22|0.52|0.23|0.24|0.24|0.41|0.17|
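The noise-ceiling computation described above can be sketched as follows. This is a minimal numpy illustration of a leave-one-out estimator (correlate each rater with the mean of the remaining raters, then average); the authors' exact procedure may differ:

```python
import numpy as np

def noise_ceiling(ratings):
    """Leave-one-out noise ceiling: for each rater, correlate their
    ratings with the mean of the remaining raters, then average.
    `ratings` has shape (n_raters, n_odorants)."""
    n_raters = ratings.shape[0]
    corrs = []
    for i in range(n_raters):
        rest = np.delete(ratings, i, axis=0).mean(axis=0)
        corrs.append(np.corrcoef(ratings[i], rest)[0, 1])
    return float(np.mean(corrs))
```

With perfectly consistent raters the ceiling approaches 1; noisier ratings pull it toward 0, matching the interpretation of the Sagar vs. Keller values above.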
* **(W3) Introduction references**: We do agree that DiCarlo & Cox, 2007 and Olshausen & Fields, 2004, are more suitable for the purpose of this sentence and, as such, we have followed the reviewer’s suggestion and replaced them in the updated version of the paper.
* **(W4) RSM analysis**: We thank the reviewer for the analysis suggestions. We provided the RSM figure only for the Ravia dataset and for MoLFormer, OpenPom and Human Perception in **Figure I** of the general rebuttal document, and we will add that for both the Ravia and Snitz datasets and also for the DAM model in the updated version of the paper (we don't include Snitz dataset and DAM model in the pdf due to the lack of space). We also added a table showing the correspondence of labels on axes with the real CIDs in the paper appendix.
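For readers unfamiliar with RSMs, a minimal numpy sketch of how such a matrix, and a second-order comparison between two models, might be computed (an illustration only, not the authors' exact pipeline):

```python
import numpy as np

def rsm(reps):
    """Representational similarity matrix: pairwise Pearson correlations
    between odorant representation vectors (one row of `reps` per odorant)."""
    return np.corrcoef(reps)

def rsa_score(rsm_a, rsm_b):
    """Second-order similarity: correlate the upper triangles of two RSMs."""
    iu = np.triu_indices_from(rsm_a, k=1)
    return float(np.corrcoef(rsm_a[iu], rsm_b[iu])[0, 1])
```

Comparing `rsa_score(rsm(model_reps), rsm(human_ratings))` for each model is one standard way to quantify how well the models capture the pattern of similarities between odorants.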
* **(W5) SMILES/Input for the different models**: The SMILES representation encodes the molecular graph of a chemical (e.g., a molecule) as a short ASCII string. It does so by first performing a depth-first tree search over the chemical graph, adding the elements, bonds, and rings, and subsequently removing hydrogen atoms and breaking cycles (e.g., water is represented by “O” in the SMILES notation). We use SMILES representations of odorants as input to the MoLFormer; for example, toluene is written as the string "Cc1ccccc1". For the Open-POM, each odorant is represented as a graph, where each atom is represented as a node feature vector (e.g., valence, hydrogen count, atomic number, ...), and each bond is represented as an edge feature vector (e.g., degree, aromaticity, whether it is part of a ring or not). Finally, for the DAM model, we use a set of predefined physio-chemical features, which we extract using the AlvaDesc software. We have clarified these representations in the updated version of the document.
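As a toy illustration of the SMILES notation described above, heavy atoms in simple organic-subset strings can be counted with a small regular expression. This is a hypothetical helper for illustration only; real applications would use a full parser such as RDKit:

```python
import re

# Toy heavy-atom counter for simple SMILES strings. Covers only the
# organic subset (no bracket atoms, isotopes, or charges); digits such
# as the ring-closure "1"s in "Cc1ccccc1" are simply not matched.
ATOM_RE = re.compile(r"Cl|Br|[BCNOSPFI]|[bcnosp]")

def count_heavy_atoms(smiles: str) -> int:
    return len(ATOM_RE.findall(smiles))
```

For toluene ("Cc1ccccc1") this yields 7 heavy atoms, and for water ("O") it yields 1, matching the examples in the rebuttal.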
* **(Q1) Correspondences of Inputs**: We thank the reviewer for raising this point. In our work, “smell like a human” corresponds to whether representations of odorant chemical structures extracted from learning-based models are aligned with human olfactory perception (line 37). The underlying assumption of this question is that the "stimuli" provided to human participants are similar to the ones provided to the models via the different types of representations. We agree that there are aspects not captured by our representations, such as the concentration and intensity of different molecules in an odorant, and we will discuss this better in the updated paper. In our study, however, the inputs to the olfactory receptors are the molecules of a specific odorant, and we used representations of those exact same molecules as the input to the models.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing these helpful explanations and revisions!
Any idea why there isn't much visible grouping by scent category (e.g., meaty, floral, ethereal) for the RSM visualization? I suppose the grouping from PCA, t-SNE, and UMAP must rely on how different features are weighted and combined.
Regardless of that question, I am very enthusiastic that the Sagar noise ceiling is reasonably high. You mention that you will "add the results divided by noise ceiling value in the appendix of the paper," and I ask that you please also include what the noise ceilings are for reference. This gives readers important information about the quality of each benchmark dataset.
Based on the authors' rebuttal PDF and responses to my review, I have decided to raise my score. I will modify my review to raise my rating from a 6 to a 7.
---
Reply to Comment 1.1.1:
Comment: We will add the definition of noise ceilings in the Appendix. Nice suggestion!
Interesting question about the RSM visualizations. Your hypothesis is an interesting one. Indeed, we believe that the main directions used in PCA and the different ways the features are combined in t-SNE and UMAP are quite important in making these groups clearly distinct.
Finally, we thank the reviewer for the discussion and feedback. Also, thank you for recognizing our efforts with the score raise. | Summary: This paper investigates the ability of MOLformer, a model trained on SMILES strings in a BERT-like fashion, to predict the human assessment of odors. The assessments were based on natural language descriptors of odors by human experts, ratings on different NL descriptors by naive subjects, and finally, on similarity ratings between odors. It compares MOLformer, which was trained in an unsupervised manner, to an engineered model (DAM) and a supervised model Open-POM. Despite being trained in an unsupervised manner to represent molecules, MOLformer holds its own against the supervised models. In addition, MOLFormer shines when evaluated using an RSA technique, suggesting that MOLformer captures the relationships between molecular odors better than the other models.
Strengths: + Assessing models representations of odors is an understudied area. I believe this is the first work to do this.
+ quite a few datasets were used, providing binary, rating scale, and similarity judgment data. These might be the only ones available
+ The results are relatively convincing
Weaknesses: + Despite a rather systematic reporting of models and datasets, some things were left out of the descriptions. For example:
- Apparently the GS-LF datasets features are binary. I assume this means that that particular descriptor was used by at least 1 judge. So is it the case that if multiple judges used the same descriptor, the results were still recorded as binary?
- how dimensionality was reduced. I assume PCA. How much variance was captured in 20 PCs?
- There were apparently descriptors in the similarity dataset as well (line 123). These weren’t described.
+ the references in the introduction were weird. See below.
Typos, comments:
The references you use in the introduction seem rather bizarre. Two Damasio references for encoding sensory input into a high-dimensional space?
Reference #4 seems out of place in this context.
References 8-10 - you have no primary references here; for example, most would reference DiCarlo’s lab here.
Reference #12 is weird. Why not reference the original transformer model? And while it’s true that for language, they are self-supervised, the original use of them for vision was supervised. It is only later that self-supervised vision models (e.g., the Masked Autoencoder) came about.
line 53: Olfactory is mis-spelled!
Figure 2: You might try using UMAP or t-SNE here instead of (or in addition to) PCA.
In your results section, you never say what the train-test split was (i.e., 80/20?)
I was thinking - they should use RSA - and then you did! This is a very interesting result because, rather than just being able to predict human results, the model is better at both of the others in capturing the *relationships* between the odors, just from their chemical descriptions.
- Figure 6: Explain why didn’t you evaluate DAM here.
- You didn’t describe how the physicochemical descriptors were obtained.
Line 284: In section 4.4, it wasn’t clear that for this analysis, you were trying to discover the reasons for the alignment. You should say something about that when introducing that analysis.
lines 289-292: It makes sense to me that the low-level features of the molecules would be best encoded in earlier layers and not in later layers, where more abstract features would be encoded. This seems to fit well with the way vision models work.
Technical Quality: 4
Clarity: 2
Questions for Authors: See my questions above:
1. variance captured by PCA
2. How are the physicochemical descriptors derived?
3. Try UMAP or t-SNE
Confidence: 5
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: These are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and for the time spent reviewing our paper. We address the questions (Q), weaknesses (W), and comments (C) raised by the reviewer as follows:
* **(W1) Description of GS-LF dataset**: The GS-LF dataset is a multi-label binary dataset, where each data point (the “odorant”) has a set of labels (the “descriptors”) evaluated by a single expert evaluator (the “judge”). As such, no odorant has ratings from multiple judges, but the single judge was asked to assign labels to each odorant, chosen from among 138 descriptors. We clarified this point in the updated version of the paper.
* **(W2 and Q1) Dimensionality reduction method and variance captured**: In this work, we use PCA to reduce the dimensionality of the representations. We report the amount of variance explained by the first 20 PCs in the following table. The results show that our 20 PCs capture the majority of the variance of the data.
| Dataset / Method | Keller | Sagar | GS-LF|
| -------------|:-------------:|:-------------:|:-------------:|
|MoLFormer|${0.62 \pm{ 0.002}}$ |${0.67\pm{0.004}}$|${0.59\pm{0.000}}$|
|Open-Pom|${0.70 \pm{ 0.002}}$|${0.74\pm{0.003}}$|- (trained end-to-end)|
|Fine-tuned MoLFormer|${0.75\pm{ 0.001}}$|${0.78 \pm{ 0.003}}$|${0.75\pm{0.03}}$|
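The variance captured by the first 20 PCs, as reported in the table above, can be computed from the singular values of the centered data matrix. A minimal numpy sketch (an illustration, not the authors' code):

```python
import numpy as np

def explained_variance_top_k(X, k=20):
    """Fraction of total variance captured by the first k principal
    components, computed via SVD of the centered data matrix X."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values, descending
    var = s ** 2                              # proportional to PC variances
    return float(var[:k].sum() / var.sum())
```

A ratio close to 1 means the first k components retain almost all of the representation's variance, as the table reports for the fine-tuned MoLFormer.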
* **(W3) Descriptors in the similarity dataset (line 123)**: You are correct. Line 123 is not accurate. Similarity datasets do not include descriptors, and in this case, $y_i \in [a, b]^{n \times 1}$. We have corrected this line in the updated version of the paper.
* **(W4) References in the introduction**: We thank the reviewer for this comment. As pointed out as well by reviewer UQYS, we have replaced Damasio references by the work of DiCarlo & Cox, (2007) and Olshausen & Fields, (2004). We agree that reference [4] is out of place: in line 34 it should read “(...) impressive performance in various tasks such as control [4], image [13], (...)”. We have also added the reference to the work of Vaswani et al. (2017) as the original reference for the transformer. Moreover, we agree that [13] uses a supervised objective to train the transformer, and we also added the reference to He et al. (2022) on MAEs to the discussion on self-supervised image transformers.
* **(C1, Q3) UMAP and t-SNE visualizations**: We added the t-SNE figures to the attached rebuttal PDF. We also ran experiments with UMAP and will add them to the appendix of the paper (we don't include these in the attached PDF due to lack of space). Overall, we found these figures to be harder to interpret than the PCA figure, which was already suggested in [20].
* **(C2) Train-test split**: In all evaluations in our work, we use an 80-20% train-test split, with 30 different randomly-seeded partitions. Moreover, we used nested cross-validation to estimate the hyper-parameters of the linear models. We have added this information to the beginning of Section 4 of the updated version of the paper.
* **(C3, Q2) Physio-chemical descriptors and Figure 6**: We extract the physio-chemical descriptors using the AlvaDesc software. These physio-chemical descriptors are those that are used as input features of the DAM model proposed by Snitz et al. (2013). As such, in Figure 6, we don’t evaluate the prediction capabilities of the DAM model, as the targets would be the same as its input features. We have clarified this point in Section 4.4 of the updated version of the paper.
* **(C4) Motivation of section 4.4 (line 284)**: We have clarified the motivation for the additional analysis in the updated version of the paper.
* **(C5) Alignment with low-level physio-chemical descriptors**: We thank the reviewer for the insightful comment on the results regarding the alignment of the model with physio-chemical descriptions, which we fully agree with. The reviewer pointed out correctly that in vision models, low-level features are extracted in the early layers of the model, and the same behavior can be observed here. We added this insightful interpretation in our final paper.
---
Rebuttal Comment 1.1:
Title: thanks for the response
Comment: Thanks for your response to my criticisms. Since I already gave the paper what I consider a relatively high score, I will leave it at that. I think it's a very interesting paper, showing that a self-supervised transformer trained on molecular structures (without anything specific to odors), can reasonably well predict odor ratings by humans. Good job, and I hope the paper is accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments and feedback. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for the constructive and interesting questions and also for the positive and encouraging feedback. We did the following extra experiments, analysis, and visualizations to answer the questions raised by the reviewers.
1. We visualized representational similarity matrices (RSM) for Ravia and Snitz datasets. **Figure I** shows results for the Ravia dataset
2. We applied two additional methods for reducing dimensionality: t-SNE and UMAP. **Figure II** shows the results obtained for t-SNE.
3. We fine-tuned the model and visualized the results of fine-tuning in the general rebuttal document, see **Figure III**.
You can find these results in the attached document.
Pdf: /pdf/b9b1ad6b4d6fa5ffb47b4bf97e549068ae97e7a7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning | Accept (poster) | Summary: This paper identifies an overlooked problem in offline model-based reinforcement learning (RL).
Offline model-based RL approaches commonly perform online RL inside a learned dynamics model and
due to model errors, they generally use truncated $k$ step rollouts, instead of full trajectories (episodes).
In this paper, the authors replace the learned dynamics model with the true simulator and show that offline model-based
RL approaches fail even when they have access to the true dynamics.
This is due to states which can only be reached at the end of the $k$ step rollouts (which the authors coin "edge-of-reach" states) and thus only appear in the target of the Q update. As such, the Q-function is never updated at these "edge-of-reach" states, so any overestimation is gradually propagated over the entire state-action space.
The authors provide a nice diagram which provides intuition and shows the effect in a simple environment.
They then propose an algorithm which overcomes the so-called "edge-of-reach" problem by using an ensemble of Q functions.
They demonstrate good performance in D4RL and V-D4RL.
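The mechanism summarized above can be illustrated with a toy tabular value update, in which a state that appears only as a next-state keeps its erroneous value and leaks it into the states that bootstrap from it. This is a hypothetical three-state example, not the paper's code:

```python
import numpy as np

# Toy illustration: state 2 is "edge-of-reach" -- it appears only as a
# next-state in the data, so its Q-value is never corrected by updates,
# and its erroneous value propagates to states that bootstrap from it.
Q = np.zeros(3)
Q[2] = 100.0            # erroneous overestimate at the edge-of-reach state
transitions = [(0, 0.0, 1), (1, 0.0, 2)]  # (state, reward, next_state)
gamma = 0.99
for _ in range(50):
    for s, r, s_next in transitions:
        Q[s] += 0.5 * (r + gamma * Q[s_next] - Q[s])
# Q[2] is never on the left-hand side of an update, so its overestimate
# survives and is bootstrapped back into Q[1] and Q[0].
```

Despite all true rewards being zero, Q[1] converges toward gamma * 100 and Q[0] toward gamma^2 * 100, mirroring how a single uncorrected edge-of-reach value contaminates the rest of the value function.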
Strengths: I think this is a good paper that should be accepted.
It identifies an overlooked problem in offline model-based RL which seems like it will be an important contribution to the community.
The authors have also done a nice job providing intuition for the problem via Figure 2 and the example in Section 4.
Based on the identified problem, the authors also propose a novel method which is simple yet effective.
They have also demonstrated the effectiveness of their approach on data sets with both proprioceptive and image-based observations.
Weaknesses: I have no major weaknesses. Nevertheless, I do have some comments which I provide to help improve the paper.
The main weakness of this paper is that it looks messy and overcrowded. There are multiple reasons for this which I will now detail.
The use of italics throughout the text makes the paper look very messy.
Sometimes the italics refer to questions whilst other times they just seem to be random words.
I would advise the authors to remove all italics.
There are too many subsections.
For example, in Section 6 there are 5 subsections and each subsection is a single paragraph.
I would advise the authors to replace \subsection with \paragraph.
Similarly, on Lines 283/293/303 the authors use unnumbered subsections. This looks very messy.
I suggest the authors replace this with \paragraph so that the text starts immediately after the bold text and does not have a line break before.
There are a lot of incorrect uses of colons, which makes the writing quality feel poor. Please remove the colons on lines 12/90/114/122/137/144/154/176/195/273/305.
The authors do not mention how the Q-function is initialized. I think it is very important to include all of this information.
For example, if we initialize the Q-function to be zero at all inputs wouldn't that help prevent overestimation bias from "edge-of-reach" states?
## Minor comments and typos
- Line 89 - $\\}\_{i=1,\ldots,N}$ should be $\\}\_{i=1}^{N}$
- Line 122 - incorrect use of semicolon. This should just be "and".
- Line 137 - the full stop should be outside the quote.
- Line 167 - $0\ldots k-1$ should be $0,\ldots,k-1$
- Line 170 - Why is the last sentence of the definition not italicised?
- Line 249 - the comma should be outside the quote.
- Line 306/308/309/31 - the dataset subscript looks wrong.
- Sections shouldn't lead straight into subsections, e.g. 6 into 6.1.
- Add a few sentences explaining what you are introducing in this section.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is on the x-axis of Figure 5? What environment is this from? This is not clear from the text or caption.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the evident care they took in providing us with feedback. We are encouraged by the reviewer’s comments that the paper “should be accepted” and "will be an important contribution to the community,” and are grateful for their detailed comments which will help us improve our paper.
---
**Weaknesses - Formatting, punctuation, use of subsections, etc.**
We thank the reviewer for their detailed feedback on the organization and formatting of the paper. Having revisited the paper, we completely agree with the reviewer’s comments. We will use the extra page allowance to make the paper feel less crowded, and will make all of the suggested changes (use of italics, commas and colons, subsections, etc.) ahead of the camera-ready submission. We are grateful for the care taken in providing a detailed list of suggested edits, and would like to thank the reviewer again for their time in helping us improve the paper.
**Q1 - Implementation details of Q-function initialization**
We use the default initialization scheme (Kaiming uniform). We thank the reviewer for asking this and will add this implementation detail to the paper.
**Q2 - Effect of initializing the Q-function at zero**
[Please see the Rebuttal PDF.]
The reviewer asked whether the edge-of-reach problem could be solved by simply initializing the Q-function to be zero for all inputs. This is a good question and gets to the heart of the pathological issues involved in the edge-of-reach problem. Initializing a network so that its output is zero given any input is non-trivial. (The obvious way to do so would be to initialize all weights to zero, but the network would then not train.) Thus, we instead consider what would happen if we initialize so that all Q-value outputs are very close to zero.
As the reviewer commented, in this case we might expect the edge-of-reach problem to be avoided since edge-of-reach states cannot then be a source of overestimation if their Q-values are significantly less than the optimal Q-values. However, as shown in the histograms in the rebuttal PDF, the standard NN initialization that we already use in all our experiments results in all Q-values being very close to zero at initialization ($<1e-2$). Moreover, all these values are much lower than the optimal Q-values ($\sim 5e+3$) and much lower than the Q-values reached over training ($>1e+10$) (see Figure 3). Yet we still observe the edge-of-reach problem. This means the above line of reasoning does not hold.
Instead what happens is that, when values at within-reach states are increased, the use of NNs as function approximators means that the Q-values at nearby edge-of-reach states may also increase. Q-values at edge-of-reach states are therefore not guaranteed to remain small over training, and hence can become a source of overestimation. Moreover, overestimated Q-values lead to other nearby Q-values also being raised, which can then lead to the original Q-values being raised further, thus setting up a pathological positive feedback cycle whereby Q-values increase unboundedly over training (as seen in Figure 3). This is investigated and explained more thoroughly in [1] and [2] in the context of the analogous model-free offline problem (where function approximation means out-of-sample state-actions become a source of pathological overestimation).
We thank the reviewer for this important question. We will use the camera-ready extra page allowance to include some more of the detail from [1] on the role of function approximation in this pathological positive feedback loop.
[1] - Aviral Kumar, et al. Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction, 2019.
[2] - Sergey Levine, et al. Offline reinforcement learning: Tutorial, review, and perspectives on open problems, 2020.
**Q3 - Clarification on the $x$-axis of Figure 5**
The $x$-axis of Figure 5 shows the variance over RAVL’s Q-ensemble. Since RAVL takes the minimum over this Q-ensemble (see Equation 2), the penalty being added is effectively proportional to the Q-ensemble variance, which is why we labeled the $x$-axis “RAVL’s effective penalty.” We described this briefly in the caption, but we will make this clearer in the camera-ready version.
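A minimal numpy sketch of this min-over-ensemble penalty, with illustrative values only (see Equation 2 of the paper for the actual objective):

```python
import numpy as np

# Sketch: with an ensemble of Q-estimates for one state-action pair,
# taking the minimum acts as an uncertainty penalty whose size grows
# with the ensemble spread (illustrative numbers, not from the paper).
q_ensemble = np.array([9.5, 10.0, 10.5, 12.0])  # 4 ensemble members
q_min = q_ensemble.min()                         # value used by RAVL-style updates
effective_penalty = q_ensemble.mean() - q_min    # larger spread -> larger penalty
```

Under this view, edge-of-reach states, where the ensemble members disagree because no Bellman update has anchored them, receive a large effective penalty.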
---
We thank the reviewer again for their careful review of our submission. The question about initializing the Q-function at zero was particularly insightful. We are also extremely grateful for the feedback about formatting and the detailed list of suggested edits. We agree with these and will make the suggested changes. Thank you for your help in improving our paper. Please let us know if you have any more questions or comments as we would be more than happy to discuss further.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for considering my comments and providing a detailed rebuttal. I have also taken the time to read the other reviews and comments. I believe this paper should be accepted so I stand with my initial score of 7.
One minor comment regarding Q-function initialisation. Could you not just initialise the Q-function by subtracting a frozen copy of the network from itself, like,
$$Q(s,a) = Q_{\theta}(s,a) - Q_{\theta_0}^{\text{frozen}}(s,a).$$
I appreciate this would probably not solve the issue, but this is what I meant when I suggested initialising the Q-function to be zero everywhere.
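The suggested difference-of-networks initialization can be sketched with a tiny numpy MLP: the output is exactly zero at initialization, yet becomes nonzero once the live parameters move while the frozen copy stays fixed. This is a toy sketch under assumed shapes, not a full implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, params):
    """Tiny two-layer MLP: x -> tanh(x W1) W2."""
    W1, W2 = params
    return np.tanh(x @ W1) @ W2

params = [rng.normal(size=(4, 16)), rng.normal(size=(16, 1))]
frozen = [p.copy() for p in params]  # frozen copy taken at initialization

def q_zero_init(x):
    # Q(s,a) = Q_theta(s,a) - Q_theta0_frozen(s,a): exactly zero at init;
    # gradients still flow through `params`, so the network can train.
    return mlp(x, params) - mlp(x, frozen)
```

As the authors' follow-up experiments show, the overestimation mechanism does not depend on the initial values, so this construction alone would not prevent the Q-value explosion.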
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We had not considered this initialization. As you said, it would still not be expected to solve the problem as the overestimation mechanism described in our original answer still holds.
We have run some experiments to confirm this. The table below shows the final performance and $Q$-values for 3 seeds of Walker2D-medium with MBPO with and without this initialization method. As expected, this does not solve the problem and we see the same Q-value explosion and no significant difference between these initialization methods. (For reference, expert performance is $\sim 100$, and correct Q-values of the optimal policy are $\sim 5e+3$.)
| *(3 seeds)* | Final Score | Final $Q$-values |
|:-:|:-:|:-:|
|Default initialization | (-0.2, 2.3, -0.1) | (4e11, 2e12, 5e12) |
|Suggested zero initialization| (4.1, -0.3, -0.7) | (6e11, 3e13, 1e11) |
We thank you again for your response and for your recommendation of acceptance for our paper! | Summary: The paper presents a novel analysis of the challenges faced by offline model-based reinforcement learning (RL) when the dynamics model becomes increasingly accurate. The central thesis is that existing methods fail under true, error-free dynamics due to a previously unaddressed issue termed the "edge-of-reach" problem. The authors provide a thorough investigation, theoretical analysis, and empirical evidence to support their claims. They propose Reach-Aware Value Learning (RAVL), a method that directly addresses this problem, demonstrating robust performance across benchmarks.
Strengths: 1. The paper is well-organized, with a clear abstract, introduction, and conclusion that effectively summarize the contributions and findings.
2. The authors have provided open-source code and detailed hyperparameter settings, facilitating the reproduction of the experiments.
3. Rational experiments on standard benchmarks and a simple environment validate the "edge-of-reach" hypothesis and demonstrate the effectiveness of RAVL.
Weaknesses: The methodology contribution of this paper is limited. While I grant that the "edge-of-reach" problem is significant to the current offline RL literature, the proposed method fails to demonstrate obvious superiority compared with SOTA offline model-free or model-based methods (see Table 2 of this paper). One reason, I guess, is that the D4RL benchmark is too simple to underscore the benefit of addressing the "edge-of-reach" problem. On the other hand, I expect that the authors can validate their method on more complex and challenging benchmarks and provide more convincing results. I am willing to reconsider my score after seeing this.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I wonder how existing uncertainty-penalty-based offline model-based RL methods perform if I gradually increase the length of rollouts in the estimated dynamics model along with the improved accuracy of the model (a very intuitive trick).
2. Does the "edge-of-reach" hypothesis exist in many online RL scenarios? For example, researchers usually early stop the trajectory according to a pre-defined maximal length of the trajectory in the Gym Mujoco benchmark.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations of their work properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their careful feedback on our submission.
We are grateful that they appreciated our “thorough investigation, theoretical analysis, and empirical evidence to support [our] claims” and “robust performance across benchmarks.”
We found their questions particularly insightful (see below).
---
**W1.1 - Performance on D4RL**
The reviewer highlights that, on the D4RL benchmark, we only match SOTA rather than surpassing it. Below we discuss why this is expected and why we believe this does not affect the key contribution of our paper.
In Section 6.5 we describe why existing methods can sometimes be made to work despite misunderstanding the underlying issues in offline RL. (tl;dr: If there is a lucky correlation between dynamics uncertainty and EOR states, then the dynamics penalties of existing methods can indirectly address the true EOR problem.) For the common D4RL setup (on which many existing methods have been developed) this lucky correlation holds (see Figure 6), meaning existing methods can be made to work well, and we would not necessarily expect RAVL to beat them. (Further, the majority of SOTA scores are close to, or already exceed, expert level $>100$.)
In general, however, this lucky correlation is not guaranteed, and the indirect approach of existing methods means we may expect them to be very brittle to dynamics model changes. Indeed, in Figure 1 of the rebuttal PDF, we show that existing methods completely break down if the dynamics model is changed slightly. RAVL, by contrast, is much more robust due to directly addressing the true underlying EOR problem (see Figure 1 in the rebuttal PDF).
We believe the correction of a significant misunderstanding, and RAVL’s consequent much-improved robustness are highly valuable contributions to the community.
**W1.2 - Performance on more complex and challenging benchmarks**
We thank the reviewer for asking about more complex and challenging benchmarks. We would like to highlight our additional results on the pixel-based V-D4RL benchmark in Appendix F (moved to the Appendix due to space limitations).
Here RAVL represents a new SOTA, giving a performance boost of more than 20% for some environments. These results are particularly notable as the pixel-based setting means the base algorithm (DreamerV2) uses model rollouts in an imagined latent space (rather than the original state-action space as in MBPO). The results therefore give promising evidence that RAVL is able to generalize well to different representation spaces.
We will use the camera-ready extra page to feature these results in the main paper. We hope this answers the reviewer’s concern with a much higher-dimensional and more challenging benchmark.
**Q1 - What happens if we increase the rollout length as dynamics accuracy improves?**
We thank the reviewer for this interesting and insightful question. Tuning the rollout length as suggested is indeed what we tried first.
In the limit of perfect dynamics accuracy this of course works, since rollout length can be increased to full horizon and the procedure becomes online RL. For intermediate dynamics model accuracies, however, we experimented with this extensively but found we were unable to get existing methods to work.
Our intuition for why this is the case is as follows:
Consider a 2D environment like in Section 4, and imagine we have a dynamics model for which error and uncertainty are significant to the left and negligible to the right. The model being imperfect means rollouts will need to be truncated. (All existing methods heavily truncate rollouts, e.g. $k=1,5$, to avoid compounding errors, even with dynamics uncertainty penalties.) However, rollout truncation means we now have EOR states.
The question is: *Is it possible to tune dynamics uncertainty-based methods to work in this setting?*
Uncertainty penalties can successfully penalize the EOR states to the left. However, it is impossible to get uncertainty penalties to target the EOR states to the right (since model uncertainty to the right is negligible).
This illustrates that we can have dynamics models for which it is theoretically impossible to get uncertainty-based methods to work, even allowing for tuning the rollout length. (RAVL, by contrast, would be expected to work in this setting since it directly ensures all EOR are targeted.)
The much stronger reliance on the dynamics model for uncertainty-based methods compared to RAVL (as highlighted by this setting) fits with our empirical observations of existing methods being much more brittle to changes in the dynamics model. See for example our new results (Figure 1 of the rebuttal PDF), in which we find that RAVL is significantly more robust, both when model accuracy is increased and decreased.
We thank the reviewer for asking this. We found it to be a very interesting and insightful question and would be more than happy to discuss further.
**Q2 - Does the "edge-of-reach" hypothesis exist in many online RL scenarios?**
Thank you for another insightful question!
The trajectory cutoff, for example $H=1000$ for MuJoCo, can indeed be viewed as a truncated horizon. This means the EOR problem could in theory be present in online RL. We anticipate, however, that $H=1000$ is sufficient for the agent to be able to reach all possible states within this horizon, meaning that the set of EOR states will be empty and the EOR problem will not be present.
As briefly mentioned in the limitations section, however, we are interested in investigating whether truncated horizons could be used in some way in online RL to harness the EOR effect and induce an implicit exploration bias.
---
We thank the reviewer again for their valuable feedback and insightful questions. We hope we have addressed all of your concerns. Please let us know if you have any remaining questions. If we have been able to address your concerns we humbly ask if you would recommend acceptance.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response -- I appreciate all the explanations and additional results. I'm raising my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply, and again for your helpful feedback that has helped us improve our paper. We deeply appreciate you raising your score. Please let us know if you have any remaining concerns and if there is anything we can provide in the discussion period that would enable you to raise your score further. | Summary: This paper identifies and investigates a previously neglected issue in offline model-based RL called the "edge-of-reach problem", which is due to the truncated rollouts used in model-based RL to mitigate the compounding errors of the dynamics model. The authors proposed Reach-Aware Value Learning (RAVL) to address the edge-of-reach problem using value pessimism. They show RAVL's strong performance on the D4RL and V-D4RL benchmarks.
Strengths: 1. The edge-of-reach problem is interesting and previously overlooked in offline model-based RL. This paper is the first to formally investigate the problem.
2. Comprehensive, well-designed experiments to support the claims.
Weaknesses: 1. The proposed method doesn't outperform existing baselines according to the reported results on D4RL. Although it's claimed that "RAVL can be easily combined with appropriate dynamics uncertainty penalization", this is not supported by any empirical evidence. Hence, it's unclear how well RAVL works in general when combined with other model-based methods whose design choices are orthogonal to value pessimism.
2. I found the paper a bit hard to read due to the formatting, e.g. too many long italicized phrases.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Why does RAVL not benefit from reduced dynamics errors according to Table 1?
2. It would be interesting to show how RAVL's performance changes as the accuracy of the dynamics model is varied, similar to Figure 1.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review of our submission.
We are encouraged that the reviewer found the problem to be “interesting,” and the experiments “comprehensive, well-designed”, and are grateful they appreciated that the work is “the first to formally investigate the problem”. We have addressed their questions and comments below.
---
**W1 - Combining RAVL with dynamics uncertainty penalization**
The reviewer asked for support of the claim that "RAVL can be easily combined with appropriate dynamics uncertainty penalization.”
In terms of implementation: this is extremely simple since dynamics penalization is typically a change to the dynamics model’s reward, while RAVL’s value pessimism is a change to the agent’s value function update.
In terms of empirical evidence: we refer the reviewer to the V-D4RL experiments. As stated in Appendix C.3, for the RAVL results we add RAVL’s value pessimism in addition to the dynamics pessimism already used in the base procedure. As such, the V-D4RL results are an example of combining RAVL’s value pessimism with dynamics uncertainty penalization, exactly as requested. This provides evidence that RAVL’s value pessimism is effective even when the model is less accurate and dynamics uncertainty is necessary. We will make the details in Appendix C.3 more prominent in the camera-ready submission.
**W2 - Formatting e.g. italics affecting readability**
Having looked over the paper again we completely agree with this. We will make sure to refine our use of italics and bold text etc. ahead of the camera-ready submission. Thank you for bringing this to our attention.
**Q1 - Why does RAVL not benefit from reduced dynamics errors according to Table 1?**
Thank you for this question.
Section 3.4 claims and gives evidence that dynamics model errors are already negligible for the main D4RL benchmark (and that issues are instead caused by the edge-of-reach problem).
The trend referred to (namely that reducing dynamics errors does not affect performance) is therefore exactly what we would expect, and the results in Table 1 can hence be seen as further evidence that (contrary to common understanding) dynamics errors contribute negligibly to the issues seen in offline model-based RL.
**Q2 - Results of RAVL's performance with different dynamics model accuracies**
[Please see the rebuttal PDF.]
We have run additional experiments (4 seeds), and have added results for RAVL to the original Figure 1 as requested (please see the new Figure 1 in the rebuttal PDF). As expected, RAVL’s performance remains strong as dynamics model accuracy increases. Furthermore, RAVL appears to be significantly more robust than dynamics uncertainty-based methods as dynamics model accuracy is reduced.
We sincerely thank you for asking for this. These results significantly strengthen our paper and we are excited to add them to the camera-ready submission.
---
We thank the reviewer again for their invaluable feedback. We hope we have addressed all of your questions and concerns. Please let us know if there is anything remaining as we would be more than happy to discuss further. If we have been able to address your concerns, we humbly ask if you would consider raising your score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and the additional experiments. I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and for raising your score. We really appreciate your support of our paper. | Summary: The paper investigates a phenomenon in model-based offline RL: most existing algorithms fail when they are given the perfect model of the environment. Through multiple experiments, the paper argues the failure is due to wrongly estimated values at the states at the end of model rollouts. These states, called edge-of-reach states, are used as targets for Q-values but are never trained themselves. The paper then proposes a fix in the form of a pessimistic target value estimate using ensemble Q-networks.
Strengths: - The paper does an admirable job with empirical investigations of the root cause of the observed phenomenon. I find the perfect model experiments, the simple environment example, and the patch oracle fix to be a very convincing set of evidences for the paper's claim. The paper sets a good example for this kind of empirical research.
- The introduced fix is practical and simple, and I believe might improve other existing model-based offline RL algorithms as well
- The empirical results obtained by the
- I find the paper well written and free of typos.
Weaknesses: **1)** The paper could benefit from more investigations on the necessity of the _detection_ of an edge-of-reach state. The current proposed value learning only uses the pessimistic Q target when it decides that the state is an edge-of-reach state.
**Q1)** I wonder whether this distinction is necessary and whether we could always use the pessimistic value and get similar results.
I believe this could be doable because in states with low variance, the pessimistic value will not differ from the normal one anyway. Potentially, this removes a hyperparameter which sets the threshold for the choice of value target. It also reduces the number of ensemble networks needed for the algorithm.
**2)** The paper could better position itself in the literature. The pessimistic value target is a ubiquitous trick in RL for overcoming value overestimation. See for example the TD3 paper [1] for actor-critic methods. The issue where models query Q-networks at untrained states has also been pointed out in MBRL research. See for example [2].
**3)** It probably is out of the scope of the paper, but I wonder how much of this issue is shared with online MBRL algorithms. They also usually only create rollouts of a limited length. A discussion on this matter will be welcome.
**4)** It could be a personal matter, but the paper has too much forward referencing. For example lines 160, 220, 231, 253, 260, 261. I find these confusing as a reader. Jumping ahead to a future section mid-read is not ideal or even possible. Also, by the time the future section is read, the connection made in the earlier sections is probably forgotten. I recommend reducing these references and only making such connections when necessary. A general comment that more discussion of a topic will appear later could be enough for the reader without asking them to jump.
[1] Fujimoto, Scott, Herke Hoof, and David Meger. "Addressing function approximation error in actor-critic methods." International conference on machine learning. PMLR, 2018.
[2] Jafferjee, Taher, et al. "Hallucinating value: A pitfall of dyna-style planning with imperfect environment models." arXiv preprint arXiv:2006.04363 (2020).
Technical Quality: 3
Clarity: 3
Questions for Authors: See Q1, and weakness 3 above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I find the limitation discussions sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their valuable feedback on our submission.
We are particularly encouraged by their appreciation of the care we took with the empirical investigations, in particular with the comments that we provide “a very convincing set of evidences for the paper's claim,” and “set a good example for this kind of empirical research”.
---
**W1/Q1 - “necessity of the *detection* of an edge-of-reach state”**
The suggestion of removing the explicit “detection” step is in fact exactly how RAVL is implemented (see Equation 2). We just chose to describe it as (A) detecting and (B) penalizing edge-of-reach states to provide intuition of what RAVL is effectively doing. Thank you for this question. We will make it clearer in the paper that this two-phase description is just for intuition purposes.
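For additional intuition on why no separate detection step is needed, consider the following generic numpy sketch of an ensemble-pessimistic Bellman target, written as a lower-confidence bound over ensemble members. This is an illustrative sketch of the idea only, not the exact operator of Equation 2 in the paper.

```python
import numpy as np

def pessimistic_target(q_values, lam=1.0):
    """Ensemble-pessimistic Bellman target: mean minus lam * std.
    q_values: array of shape (n_ensemble, batch), each row one ensemble
    member's estimate of Q(s', a') at the rollout's next state-action.
    At edge-of-reach states the ensemble members are never trained and
    disagree, so std is large and the target is pushed down; at
    well-covered states std is small and the target is ~ the ensemble
    mean. Penalization is therefore automatic, with no explicit
    detection of edge-of-reach states."""
    return q_values.mean(axis=0) - lam * q_values.std(axis=0)
```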
**W2 - Position in literature and additional references [1] and [2]**
Thank you for highlighting this. We have referenced [1] briefly in the related work section already, but we will use the camera-ready extra page allowance to more thoroughly discuss value pessimism in online RL, including the references [1] and [2] you suggested.
**W3 - Effect of edge-of-reach states in *online* MBRL**
Thank you for this question. Yes, we are also interested in extending this work to online MBRL. One key distinction here is the pool of initial rollout states. In *offline* MBRL this is fixed over the whole of training ($s_0 \sim \mathcal{D}_\text{offline}$), while in *online* MBRL this changes as the agent collects additional trajectories online. The fixed initial state distribution is an important aspect of edge-of-reach states (see Definition 1). There may, however, be a somewhat analogous notion of edge-of-reach states in online MBRL. As briefly mentioned in the limitations section, we would be interested to investigate whether this could be used (or perhaps already plays a role) as an implicit exploration bias in online MBRL, and believe this could be an exciting direction for future work.
**W4 - Future referencing**
Thank you for bringing this to our attention. We agree that the current forward referencing may be confusing. Your suggestion of notifying the reader that "more detail *will* appear later" rather than asking them to jump is extremely helpful, and we will definitely make this change. We also plan to use the camera-ready extra page allowance to remove forward references where possible.
---
We thank the reviewer again for their valuable feedback. The pointers (in particular about the style of forward referencing) are really helpful. We hope we have addressed all of your questions. Please let us know if any further discussion would be useful as we would be more than happy to answer additional questions.
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: I thank the authors for the rebuttal. I have read all the comments and I am maintaining my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We are deeply grateful for your continued support of our paper. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers. It is clear they have each taken time and care, and their feedback has greatly helped us improve and refine our work.
We are gratified by the reviewers' overall positive assessment of our work, including comments that the paper “should be accepted,” "will be an important contribution to the community" (Reviewer 8ru8), and the appreciation that “this paper is the first to formally investigate the problem” (Reviewer PtsT).
We are also pleased that the reviewers appreciated RAVL’s “robust performance across benchmarks” (Reviewer GKpt), found the set of experiments to be “a very convincing set of evidences for the paper’s claim,” and found the paper to “set a good example for this kind of empirical research” (Reviewer biRH).
While reviewers commented that the paper was “well-organized” (Reviewer GKpt), and “well written” (Reviewer biRH), the key shared feedback seemed to be that editing the formatting (punctuation, italics, forward referencing, etc.) would make the paper easier to read. We thank the reviewers for this feedback and will be sure to make these changes ahead of the camera-ready submission, especially with the extra page allowance.
**New results showing RAVL’s robustness with changing dynamics accuracy**
[Please see the Rebuttal PDF.]
We are excited to present additional results (as requested by Reviewer PtsT) in which we extend Figure 1 of the paper to include RAVL. The original version of Figure 1 showed how dynamics uncertainty-based methods fail as dynamics accuracy is increased (in red). Please see Figure 1 of the Rebuttal PDF where we have now added RAVL (in purple).
The new results show that, while existing methods fail, RAVL maintains performance as dynamics accuracy is increased. Additionally, in the other direction, we also find that RAVL is more robust to decreasing dynamics model accuracy.
This observation of RAVL’s much stronger robustness to dynamics model changes fits with our understanding of the edge-of-reach problem, since existing methods address the edge-of-reach problem indirectly via relying on the uncertainty of the dynamics model, while RAVL directly addresses the problem without relying on the dynamics model’s uncertainty. We are excited to share these new results as they further highlight the strength of our approach.
Pdf: /pdf/5ef59d35066a8cbfff74b44108cd69906c840317.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Decoding-Time Language Model Alignment with Multiple Objectives | Accept (poster) | Summary: The paper introduces a decoding-time alignment method called Multi-Objective Decoding (MOD) for aligning language models (LMs) with multiple objectives. MOD combines a set of models aligned for individual rewards and allows any weightings (even not-all-positive) for rewards. MOD leverages a common form among f-divergence regularized alignment approaches to derive an efficient decoding strategy that greedily selects the next token from an algebraic combination of predicted probabilities of all base models. MOD can maximize an interpolated reward function without extensive retraining. The paper theoretically demonstrates the sub-optimality of existing approaches and establishes optimality guarantees for MOD. Empirical results show MOD's effectiveness, with a 12.8% overall reward improvement over a parameter-merging baseline when optimizing for three objectives.
Strengths: * **Flexibility in Objectives Alignment**: MOD's primary strength lies in its ability to align language models with multiple objectives simultaneously. This flexibility allows it to balance and prioritize different user needs and preferences without the need for retraining the model for each new objective combination.
* **Efficiency and Simplicity**: The algorithm is efficient and simple to implement, as it only needs to operate on the models' predicted probabilities at decoding time.
* **Theoretical Robustness and Empirical Validation**: MOD is underpinned by a strong theoretical foundation, with proofs that demonstrate its optimality and sub-optimality of existing methods. The paper provides empirical evidence of MOD's effectiveness across various tasks and models, showcasing its practical applicability and robustness in real-world scenarios.
Weaknesses: I don't see any major weaknesses in this paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: * Are the conditions of Eq. 6 stated correctly?
* Line 231-233: Can you explain more about this?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**
*Q: Are the conditions of Eq. 6 stated correctly?*
A: We omit some details in the main content. The formal conditions of Eq. 6 are provided in **Appendix D.3 Theorem 5**.
Briefly speaking, we require $f$ to be a strong-barrier function and the set of policies $\{\pi_i\}_{i=1,\dots,M}$ to be optimal for their respective reward (objective) functions. The first condition is always satisfied by all existing regularization methods.
For the second condition, we assume it holds in order to convey the main theoretical intuition. Later, in **Appendix F**, we further show that our algorithm is robust even when those policies are suboptimal, as long as they are not too far from optimal.
**Q2**
*Q: Line 231-233: Can you explain more about this?*
A: Yes. Since the optimization objective of general RLHF methods is $\mathbb E[\mathcal R]-\beta\text{KL}$, it is common to report $\text{KL}(\pi\vert\pi_{\text{ref}})$ in addition to rewards, to indicate that $\pi$ does not deviate much from $\pi_{\text{ref}}$. However, our approach, through the reformulation, only generates responses for given prompts without producing an explicit model, so the KL divergence cannot be computed. As compensation, we show example generations to indicate that the obtained policy does not deviate much from the reference policy. | Summary: The authors propose a decoding method that combines the predictions of diverse models, each aligned with a different objective. In their multi-objective setting, the goal is to find an optimal policy that maximizes a weighted, multi-objective reward, given the policy aligned to each individual reward. In particular, the authors propose a reformulation using the Legendre transform to bypass calculating the sequence-level normalization constant $Z$.
Strengths: 1. The authors provide detailed theoretical analysis to justify their approach.
2. The authors show that the proposed method can handle negative weights for rewards, which cannot be accomplished by previous work.
Weaknesses: 1. The baselines appear weak. For example, in Appendix F, the main comparison is against RS. However, RS cannot even outperform the best individual model in all experiments (Tables 7, 8, 9, 10).
2. The proposed MOD also seems not much stronger. MOD can only beat the best individual model on 2/4 settings in Appendix F.
3. Lack of baselines. It would be helpful if the authors can include more generic ensemble baselines such as weighted averaging/voting.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have the authors studied the inference overhead of the proposed MOD?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Conclusion and throughout the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**
*Q: The baselines appear weak. For example, in Appendix F, the main comparison is against RS. However, RS cannot even outperform the best individual model in all experiments (Tables 7, 8, 9, 10).*
A: We have pointed out the sub-optimality of rewarded soup in **Section 6.1**. Although it is a widely used retraining-free approach for multi-objective alignment, it can often lead to suboptimal performance. However, as shown in the original papers [1][2][3] and the experiments in **Section 5**, it is still a strong baseline.
**W2**
*Q: The proposed MOD also seems not much stronger. MOD can only beat the best individual model on 2/4 settings in Appendix F.*
A: We would like to clarify that the primary goal of **Appendix F** is to examine the performance of MOD and RS when the base models are suboptimal (early-stopped). The advantages of MOD over RS in this setting show the robustness of our method.
Due to restricted computational resources, it is not easy for us to train models with prominent performance on the HelpSteer dataset (this is why we put these results in the appendix for reference purposes). Consequently, certain base models would drag others down in the logit-mixing process. For example, in **Table 3**, $\pi_{1f}$ is clearly better trained overall than $\pi_{2f}$ and $\pi_{3f}$, and thus combining them would necessarily be worse than $\pi_{1f}$ itself; yet MOD is not much affected, compared to RS, demonstrating its robustness.
The performance of MOD with well-trained models is provided in **Section 5**.
**W3**
*Q: Lack of baselines. It would be helpful if the authors can include more generic ensemble baselines such as weighted averaging/voting.*
A: The basic idea of voting is to select the candidate favored by the majority, which requires the set of possible responses to be small relative to the number of models. Therefore, it is typically adopted for multiple-choice questions and is not applicable to our tasks.
We have compared MOD with several commonly used weighted averaging approaches, including *average ensemble* and *perplexity-guided ensemble* (packLLM [4]). Common ensemble methods cannot flexibly generate customized responses, since their weight-selection rules are not related to preferences. Please see the **attached PDF** in the global response for experimental results.
**Q1**
*Q: Has the authors studied the inference overhead of the proposed MOD?*
A: Please see **global response**.
[1] Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. arXiv preprint arXiv:2306.04488.
[2] Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging. arXiv preprint arXiv:2310.11564.
[3] Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning. arXiv preprint arXiv:2407.15762.
[4] Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization. arXiv preprint arXiv:2404.11531. | Summary: In many practical uses of RLHF the reward function is the convex combination of several rewards. Instead of training a single policy attempting to maximize the expected aggregate reward (subject to the usual regularization keeping it close to an anchor policy), the authors show that one can train separate policies, one for each reward and then mix them at decoding time using the same convex combination in log-probability space.
One important consequence is that one can change the weights on various rewards at decoding time, per response, making the algorithm very appealing for situations where the balance between certain rewards needs to change depending on the prompt/context.
Strengths: Novel approach to dealing with rewards that are a linear mix of "elementary" rewards; mathematically sound.
Offers a simple, practical way of changing the mix of "elementary" rewards at decoding time, per model response.
Weaknesses: The presentation could be much simpler, starting from the ubiquitous case of using KL divergence for regularization, which also leads to the elegant log-linear combination in Eq. (7). The general case for f-divergence could be mentioned, but relegated to the already prodigious appendix.
One technical weakness of the proposed approach is that one needs to serve/run M different policies at decoding time, which is significant overhead.
After completing the review I have become aware of the work in:
@misc{wang2024conditionedlanguagepolicygeneral,
title={Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning},
author={Kaiwen Wang and Rahul Kidambi and Ryan Sullivan and Alekh Agarwal and Christoph Dann and Andrea Michi and Marco Gelmi and Yunxuan Li and Raghav Gupta and Avinava Dubey and Alexandre Ramé and Johan Ferret and Geoffrey Cideron and Le Hou and Hongkun Yu and Amr Ahmed and Aranyak Mehta and Léonard Hussenot and Olivier Bachem and Edouard Leurent},
year={2024},
eprint={2407.15762},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2407.15762},
}
Section 6 and Appendix E in [wang2024conditionedlanguagepolicygeneral] are directly relevant to this paper, deriving a sensitivity analysis for logit mixing, the log-linear combination in Eq. (7).
Technical Quality: 3
Clarity: 2
Questions for Authors: The reformulation using Legendre transformation in Section 4.2 was hard to follow.
Again, a crisp derivation for the case of KL regularization that anyone can follow would greatly improve the reach, and implicitly impact of the paper.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: One technical weakness of the proposed approach is that one needs to serve/run M different policies at decoding time, which is significant overhead.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**
*Q: The presentation could be much simpler, starting from the ubiquitous case of using KL divergence for regularization, which also leads to the elegant log-linear combination in Eq. (7). The general case for f-divergence could be mentioned, but relegated to the already prodigious appendix.*
A: Thank you for your suggestions! We will consider focusing on the reverse KL case in the main content in a later version.
**W2**
*Q: One technical weakness of the proposed approach is that one needs to serve/run M different policies at decoding time, which is significant overhead.*
A: Please see **global response**.
**W3**
*Q: After completing the review I have become aware of the work in [1].*
*Section 6 and Appendix E in [1] are directly relevant to this paper, deriving a sensitivity analysis for logit mixing, the log-linear combination in Eq. (7).*
A: Yes, this paper is great work and very relevant to ours. It was released in July 2024, while the NeurIPS submission deadline was May 2024; thus it is concurrent work.
Their sensitivity analysis of logit mixing is essentially the same idea as our **Section 6.3**. The main implementation difference is that they use parameter merging as an approximation of logit mixing. Notably, we point out the sub-optimality of parameter merging in **Section 6.1**; in [1], however, models are merged during training, which alleviates this issue thanks to good coverage of the training data. Since we focus on decoding-only generation tasks, their training-based solution is not applicable to our setting.
**Q1**
*Q: The reformulation using Legendre transformation in Section 4.2 was hard to follow. Again, a crisp derivation for the case of KL regularization that anyone can follow would greatly improve the reach, and implicitly impact of the paper.*
A: More details of the reformulation are provided in **Section D.3**. The key ideas are:
1) We don’t need to get an exact model to sample responses. Decoding one response for one prompt is enough.
2) The response selection can be viewed as a constrained optimization problem: to maximize reward, with a not too low probability for reference policy (or equivalently, to maximize probability for reference policy, with a not too low reward).
3) The reward can be represented using a mapping from predicted probabilities of base policies.
4) Finally we can get a closed form solution for the dual problem.
And yes, we will consider showing a simple derivation of the reformulation for the KL case in a later version.
**Limitations**
*Q: One technical weakness of the proposed approach is that one needs to serve/run M different policies at decoding time, which is significant overhead.*
A: Please see **global response**.
[1] Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning. arXiv preprint arXiv:2407.15762. | Summary: This paper presents Multi-Objective Decoding (MOD), a novel algorithm designed to align language models (LMs) with multiple human preferences simultaneously during decoding. MOD addresses the limitations of existing methods that optimize LMs for a single reward function, thereby providing flexibility and efficiency without the need for retraining. The authors define multi-objective reward functions and assume the existence of single-objective aligned LMs optimized for specific rewards. By leveraging the properties of strong-barrier functions and using the Legendre transform, they derive a closed-form solution for linearly combining the outputs of different models, achieving multi-objective alignment. This method guarantees optimality under certain conditions and transforms response-level decoding into efficient token-level decoding using greedy search. Extensive experiments validate MOD's effectiveness, demonstrating significant improvements in reward optimization compared to parameter-merging baselines.
Strengths: MOD introduces a novel method for multi-objective alignment, enabling language models to align with multiple objectives simultaneously during decoding, thus eliminating the need for retraining.
The authors provide a robust theoretical framework by defining multi-objective reward functions and leveraging strong-barrier functions. They prove a closed-form bijection between single-objective models and their rewards, and derive a closed-form solution using the Legendre transform.
MOD achieves optimality guarantees under certain conditions and transforms response-level decoding into efficient token-level decoding using greedy search, making the method both effective and practical.
Extensive experiments demonstrate MOD's superior performance, showing a 12.8% overall reward improvement compared to parameter-merging baselines when optimizing for three objectives. The effectiveness is validated across various tasks and model sizes.
Weaknesses: Although MOD circumvents the need for retraining, it requires loading multiple models concurrently, which can be computationally intensive and may not scale efficiently for a larger number of objectives or bigger model sizes.
The paper could benefit from a more detailed discussion on potential negative impacts or failure modes, especially in scenarios involving conflicting objectives or suboptimal base model alignment.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the MOD approach scale with an increasing number of objectives? Are there any practical limits to the number of objectives that can be managed simultaneously?
Can the authors provide more insights into the sensitivity of MOD to the quality of base models? Specifically, how does the performance degrade if the base models are not well-aligned or are suboptimal?
Are there any guidelines or best practices for setting and adjusting these preference weightings to achieve optimal results?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**
*Q: Although MOD circumvents the need for retraining, it requires loading multiple models concurrently, which can be computationally intensive and may not scale efficiently for a larger number of objectives or bigger model sizes.*
A: Please see **global response**.
**W2**
*Q: The paper could benefit from a more detailed discussion on potential negative impacts or failure modes, especially in scenarios involving conflicting objectives or suboptimal base model alignment.*
A:
- Yes, we have provided sensitivity analysis (scenarios involving suboptimal base models) in **Section 6.3**.
- As for conflicting objectives, we have shown in **Section 4.2** and **Appendix D.3**, that MOD can generate a response with the interpolated reward $r\ge C_2$ (where $C_2$ is a threshold based on $\pi_{\text{ref}}$ and the reward distribution), and this is not affected by the relationship between these objectives. For example, let us take $\mathcal R_1$:Helpful and $\mathcal R_2$:Harmless. Then a potential issue might be, there may not exist a response that is both very helpful and very safe. But our approach can guarantee that we obtain a response with $w_1\mathcal R_1+w_2\mathcal R_2$ larger than that threshold. We will include this explanation in a later version.
**Q1**
*Q: How does the MOD approach scale with an increasing number of objectives? Are there any practical limits to the number of objectives that can be managed simultaneously?*
A: Theoretically, there is no limit on the number of objectives, although the time/space cost will increase. Please see the **global response** for details on how to alleviate this issue. In practice, the number of objectives cannot be too large [1][2][3], and is usually restricted to fewer than 5. Therefore, the additional inference cost is still acceptable for providers of LLMs, who are capable of supporting large-scale MoE systems [4][5].
**Q2**
*Q: Can the authors provide more insights into the sensitivity of MOD to the quality of base models? Specifically, how does the performance degrade if the base models are not well-aligned or are suboptimal?*
A: Please see W2.
**Q3**
*Q: Are there any guidelines or best practices for setting and adjusting these preference weightings to achieve optimal results?*
A: There can be two definitions for optimal results based on two different settings.
1) Customized response generation based on user preference. In this setting, there is no guarantee that we can improve on every objective. Instead, we care about finding a model that matches the user's preference. In **Section D.3**, we guarantee that the weightings $w_1,w_2,\ldots,w_m$ used at decoding time are the best combination for the reward function $\sum_{i=1}^m w_i\mathcal R_i$. In other words, as long as the weighting chosen by the user matches their true preference, we can guarantee a model that matches the user's preference. Intuitively, if all the models are trained under the *same reward scales*, then the weights chosen by users should rightly reflect their preferences over these objectives. As demonstrated in **Section 5.2, Figures 2, 3, and 4**, we believe this "same scale" assumption is effective in practice.
2) Faster model combination for general performance improvements. In this setting, we aim to find a generally better model by adjusting the weights at decoding time only. As shown in **Section 5.2, open-instruction following**, there may not be a fixed optimal solution, but we can *quickly* experiment with several weightings and then test on certain benchmarks. Since our method is greedy and decoding-time-only, this pipeline is very efficient.
[1] UltraFeedback: Boosting Language Models with Scaled AI Feedback. arXiv preprint arXiv:2310.01377.
[2] HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM. arXiv preprint arXiv:2311.09528.
[3] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization. arXiv preprint arXiv:2310.03708.
[4] Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.
[5] A Closer Look into Mixture-of-Experts in Large Language Models. arXiv preprint arXiv:2406.18219. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their timely and positive feedback. Reviewer dVwo describes our approach, MOD, as “novel, effective and practical”, and notes that “the theoretical framework is robust”; Reviewer pv2M thinks MOD is “novel, simple, practical, and mathematically sound”; Reviewer BvFb acknowledges our contributions to “detailed theoretical analysis” and praises MOD’s excellence in handling negative weights; Reviewer TTLg highlights the “flexibility, efficiency and simplicity” of MOD, acknowledging the “theoretical robustness and empirical soundness”.
Reviewer TTLg thinks there is no major weakness in this paper. The other three reviewers (dVwo, pv2M, BvFb) raise concerns about the inference overhead of MOD as the number of objectives increases. We would like to clarify this point in detail: indeed, there is a trade-off between performance and time/space cost, similar to the deployment of MoE systems [1][2]. There are two ways of alleviating this issue (visualizations are provided in the **attached PDF**):
1) Using lightweight adapters. Let $m$ be the number of objectives. By training one adapter per objective, we can implement MOD with $\mathcal O(1)$ space and $\mathcal O(m)$ time cost at inference time, relative to a single model. For example, a recent manuscript [3] adopts the same idea for reward model training.
2) Distributed deployment. Note that we can first compute the logits of all models simultaneously, which is parallelizable, and then do the mixing, so MOD can be implemented with $\mathcal O(m)$ space and $\mathcal O(1)$ time cost at inference time. Furthermore, we expect this can be further accelerated with the development of large-scale ML systems like [4][5].
We are happy to address any further comments from reviewers.
[1] Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.
[2] A Closer Look into Mixture-of-Experts in Large Language Models. arXiv preprint arXiv:2406.18219.
[3] Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts. arXiv preprint arXiv:2406.12845.
[4] DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies. arXiv preprint arXiv:2310.04610.
[5] Efficient Memory Management for Large Language Model Serving with PagedAttention. arXiv preprint arXiv:2309.06180.
Pdf: /pdf/60245b536f00eb815b460c048d74e6de23001271.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning-Augmented Algorithms for the Bahncard Problem | Accept (poster) | Summary: This work studies the Bahncard Problem and proposes a new learning-augmented algorithm, PFSUM. The writing of the paper is clear and easy to follow. The work also provides detailed mathematical proofs for six patterns, followed by experiments that validate the theoretical findings.
Strengths: This paper presents several interesting elements, particularly in its approach to a more generalized version of the Bahncard Problem. The consistency of the proposed algorithm, with its bounded promises, is a noteworthy feature. Unlike traditional approaches, this algorithm does not require predicting the entire future input, which simplifies the decision-making process. Additionally, the paper provides a detailed analysis of the cost-effectiveness across six different travel patterns, offering valuable insights into the performance of the algorithm.
Weaknesses: The contributions of this paper are relatively narrow and ambiguous. One significant issue is that the robustness of the PFSUM algorithm remains unbounded when β converges to 0. This limitation undermines the reliability of the algorithm in scenarios where β is very small. Additionally, the parameter T is critical as it defines the boundaries of both the time interval and the prediction interval. However, the paper does not provide a clear analysis of how variations in T impact the competitive ratio, leaving an essential aspect of the algorithm's performance unexplored.
The related work section fails to engage with recent literature that could provide context for this study's contributions. While [12] is a milestone work from 2020, recent publications exploring different facets of the same problem are omitted. These include works such as "Online algorithms with costly predictions (2023)" and "Learning-Augmented Online Algorithm for Two-Level Ski-Rental Problem (2024)", among others. Including these works would offer a more comprehensive view of the field's current state. The authors could also indicate the differences in principle and mechanism, rather than contextual disparities, between their work and [25].
The experimental validation is insufficient, relying on only two benchmarks, which inadequately demonstrates the generalizability and robustness of the proposed algorithm. While the paper claims to measure online algorithm performance using competitive ratios, the experiments primarily employ cost ratios. This discrepancy between theoretical claims and experimental measures undermines the impact of the findings and may lead to misinterpretation of the PFSUM's actual performance improvements.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What are the specific contributions of your work compared to other studies on the same problem, including those not cited in this paper?
2. Have you explored the effects of varying the length of the prediction interval on the performance of PFSUM? Could you provide insights into the trade-off between consistency and robustness in your algorithm?
3. Could you explain why your experimental validation relies on only two benchmarks? What are the reasons behind this choice, and do you believe that including more performance benchmarks could address potential limitations in demonstrating the algorithm's effectiveness?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There is no negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Weakness 1 & Question 1**
Our specific contributions are to (1) develop an effective learning-augmented algorithm PFSUM for the Bahncard problem **using predictions on an immediate future only, which is more practical since the predictor is much easier to train**; (2) analyze the competitive ratio of PFSUM **as a function of prediction error** by a divide-and-conquer approach and a holistic analysis of card purchasing patterns and their concatenations; (3) experimentally evaluate PFSUM and compare it with state-of-the-art algorithms in the literature. Moreover, we would like to point out that we focus on a **continuous** time setting of the Bahncard problem where requests can arise and cards can be purchased at any time instant along the time dimension. This is motivated by applications such as cloud instance reservation (in clouds, reserved VM instances can be purchased at any time instant). Our setting is more general than the slotted time setting (where requests are made only at the beginning of discrete time slots and so do card purchases) studied by prior work including reference [12] and *Online algorithms with costly predictions (2023)* you mentioned. **The unbounded robustness $1/\beta$ of PFSUM when $\beta$ goes to 0 arises from the continuous time setting.** The worst-case example is that *there are an infinite number of small-cost travel requests made continuously over time* and *predictions always forecast low total cost in the future interval* (so that the total ticket price is infinite while PFSUM does not purchase cards due to bad predictions). This example does not apply to the slotted time setting because the number of requests in an interval is limited (if the total ticket price goes to infinity, the price of individual requests also goes to infinity, and for any request with price exceeding $\gamma$ at time $t$, PFSUM purchases a card at $t$ by definition in lines 175-176). 
**In the continuous time setting, $T$ does not affect the competitive ratio as changing $T$ just implies scaling time up or down.** This is akin to the optimal conventional Bahncard algorithm SUM having a competitive ratio $2-\beta$ independent of $T$.
### **Weakness 2**
Thank you for pointing us to these papers. We will include them in the related work. *Learning-Augmented Online Algorithm for Two-Level Ski-Rental Problem (2024)* studies an extended version of ski-rental problem, which extends ski-rental **along the item dimension** (from one item to multiple items and introduces a new option of combo purchase), whereas the Bahncard problem extends ski-rental **along the time dimension** (making decisions repeatedly). They have different problem structures. *Online algorithms with costly predictions (2023)* focuses on how many predictions are required to gather enough information to output a near-optimal buying schedule, **assuming all predictions are correct**. The learning-augmented algorithm proposed **takes a suggested sequence of buying times as input** and **assumes a slotted time setting**, thereby sharing similar drawbacks to reference [12]. The paper studies only the consistency and robustness of the proposed algorithm, and **does not** derive the competitive ratio as a function of prediction error. Without the latter, one could not judge how tolerant the algorithm is to errors. If there is no ratio function of the error, the ratio of the algorithm will jump directly from consistent to robust if the prediction is slightly wrong (which usually happens in practice). Moreover, there was no experimental evaluation.
### **Weakness 3 & Question 3**
Thank you for your comments and questions. In the experiments, cost ratio refers to the ratio between the cost produced by an online algorithm and the offline optimal cost **for a given input** (i.e., travel requests). In the theoretical analysis, competitive ratio refers to the **worst-case** cost ratio across all possible inputs. So, they are consistent. As with most papers in this field, **the experiments tend to show better performance than the theoretically analyzed bounds due to the use of particular datasets**.
Robustness is an upper bound of the competitive ratio across all possible prediction errors, which means the worst-case scenario typically arising from extreme inputs. Our experiments generate travel requests from distributions that resemble real-world scenarios, which are unlikely to encounter such extreme situations. Hence, it is not surprising that the empirical results do not demonstrate the theoretical bound of robustness. **The main purpose of the experiments is to compare the empirical cost ratios of different algorithms with realistic travel request patterns, rather than demonstrating theoretical worst-case bounds.**
For experimental benchmarks, we have tried to find **all** papers related to the Bahncard problem. The **only** article that conducts experiments for the Bahncard problem is reference [28]. We therefore **refer to their experimental setup** of traveler profiles and include various types of ticket price distributions in our paper. Reference [12] did not conduct experiments for the Bahncard problem. We implement its algorithm (PDLA), and include it in our comparison. Thus, our experiments provide comprehensive empirical evaluation and comparison.
### **Question 2**
We appreciate your insight. Section 4.1 of our paper studies a prediction interval shorter than $T$ and shows that it is not good. If the length of the prediction interval is $T + \epsilon$ (for any $\epsilon > 0$), it means at time $t$, we know the predicted total regular cost in $[t, t + T + \epsilon)$. But it may happen that all the cost in $[t, t + T + \epsilon)$ is incurred before $t+ T$ (i.e., in $[t, t + T)$) or all the cost is incurred beyond $t+T$ (i.e., in $[t + T, t+T+\epsilon)$). **Thus, it is not helpful to make the correct purchase decision at time $t$, because a Bahncard purchased at $t$ can reduce the cost in $[t, t + T)$ only.**
---
Rebuttal 2:
Title: My Concerns Remain
Comment: Thank you for the detailed responses. I have thought through your work and your responses. Although I like your work and enjoyed reading the text, a couple of unresolved issues make me hesitant to improve my score.
1) Since your algorithm predicts the near future, the advice complexity could be much larger than that of algorithms predicting the long-term future. It would be interesting to compare the cumulative average competitive ratio against the competitive ratio under a long-term prediction, to establish whether the proposed online algorithm is more practical.
2) Relating the prediction errors to the competitive ratio is interesting but not new, dating back to the paper “Improving Online Algorithms via ML Predictions” (2018). A better-bounded competitive ratio would be more attractive and convincing. Furthermore, if we let β = 0, we can observe that the consistency of PFSUM converges to at least 2. As is known, in the ski rental problem, the worst-case competitive ratio is 2. This means that when β = 0, the consistency of PFSUM is worse than the robustness of other online algorithms.
3) If T is related to the prediction errors, will it affect the competitive ratio?
4) As mentioned, you only found one work that conducted experiments on the Bahncard problem, but you can still employ other online algorithms on the Bahncard problem, as you did with your reference [12]. A single benchmark cannot demonstrate your proposed contributions, particularly at ML/AI conferences, where works mostly conduct extensive experiments on numerous datasets compared across multiple benchmarks. With the current shape of the paper and your arguments, I see this problem as the Achilles' heel of the work. If the theoretical and experimental results for the Bahncard problem are still limited, you could put your algorithm into a ski rental problem with β = 0 and T converging to infinity; then you would have more benchmarks, even if they may not be perfect ones. This issue makes me wonder whether this venue is the right place for this work. In my opinion, theoretical computing conferences, e.g., STOC, FOCS, SODA, may be a better fit, as they pay more attention to theoretical contributions and would give the work a larger audience to read and cite it.
---
Rebuttal Comment 2.1:
Title: (1/2) Official Comment by Authors
Comment: Thank you for your feedback. Below are our responses to your concerns.
### **Question 1**
PFSUM predicts the near future only when it encounters a regular travel request, regardless of how long the time horizon is. However, when the time horizon is sufficiently long, time series predictions (those predicting a long-term future) are **intractable** in the real world (i.e., their prediction complexity is $\infty$). Therefore, it is **inaccurate** to assert that our prediction complexity is much larger.
For practical verification, our experimental evaluation actually includes the type of effectiveness comparison you mentioned, specifically comparing our proposed algorithm, PFSUM, which predicts the near future, with the PDLA algorithm from reference [12], which predicts the long-term future. The results show that **PFSUM consistently and significantly outperforms PDLA**.
Moreover, a once-for-all long-term prediction made at the very beginning struggles to adapt to travel requests whose patterns change dynamically, making it impractical. In contrast, our short-term prediction can overcome this limitation and adapt to such dynamics online.
### **Question 2**
The consistency of PFSUM is **better than** the robustness of other online algorithms even when $\beta = 0$. If $\beta = 0$, the consistency of PFSUM is $\frac{2}{1+\beta} = 2$. For ski-rental, the deterministic algorithm proposed in *Improving Online Algorithms via ML Predictions (NeurIPS '18)* has a robustness of $1 + \frac{1}{\lambda}$ (where $0 < \lambda < 1$ is a hyperparameter), which is greater than 2. For Bahncard, the algorithm proposed in *Online Algorithms with Costly Predictions (2023)* also has a robustness of $1 + \frac{1}{\lambda}$ (where $0 < \lambda < 1$), which is greater than 2.
### **Question 3**
We have derived the competitive ratio as a function of the prediction error. If the prediction error is related to $T$, then the effect of $T$ is implicitly included in our result. In the Bahncard problem, $T$ is a given parameter, not something created in any proposed algorithm that can be tuned. Studying the relation between the prediction error and $T$, if any, is orthogonal to our work.
---
Rebuttal Comment 2.2:
Title: (2/2) Official Comment by Authors
Comment: ### **Question 4**
Reference [12] (published in NeurIPS '20) studied several problems, including the Bahncard problem, but experimented only with the TCP acknowledgment problem using three distributions of packet arrivals. *Online Algorithms with Costly Predictions (2023)* was published in AISTATS, but it did not conduct any experimental evaluation. Thus, we disagree that our experimental evaluation is below the expectations of ML/AI conferences. **In fact, to our knowledge, we are the first to conduct extensive experiments for learning-augmented Bahncard algorithms.** Our experimental evaluation covers two types of traveler profiles, each with three types of ticket price distributions, totaling six benchmarks.
Furthermore, it doesn't make sense to evaluate our algorithm with datasets (or benchmarks) for ski rental, since in the Bahncard problem, buying decisions need to be made repeatedly over time, which is the most prominent difference from ski rental, and this difference guides the algorithm design. Based on your suggestions, however, we have employed another algorithm for comparison. We adapted Algorithm 2 proposed for ski rental in *Improving Online Algorithms via ML Predictions (NeurIPS '18)* to the Bahncard problem and named it SRL.
Let $\lambda \in (0, 1)$ be a hyperparameter. SRL purchases a Bahncard at a regular travel request $(t,p)$ if and only if there exists a time instant $t' \in [t - T, t]$ satisfying one of the following conditions:
- The predicted $T$-future-cost at time $t'$ is $\geq \gamma$, and the sum of the costs over the time interval $[t', t]$ is greater than $\lambda \gamma$.
- The predicted $T$-future-cost at time $t'$ is $< \gamma$, and the sum of the costs over $[t', t]$ is $> \frac{\gamma}{\lambda}$.
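A minimal sketch of this adapted rule, under two assumptions not stated above: candidate instants $t'$ are restricted to past request times (cumulative cost only changes when a request arrives), and `predict_future_cost` is a given predictor. All names are hypothetical:

```python
def srl_should_buy(t, requests, predict_future_cost, gamma, T, lam):
    """SRL's purchase test at a regular travel request time t.

    requests:            (time, price) regular-rate requests observed up to t.
    predict_future_cost: callable giving the predicted T-future-cost at s.
    gamma, T, lam:       threshold, card validity length, hyperparameter.
    """
    for tp in (s for (s, _) in requests if t - T <= s <= t):
        window_cost = sum(p for (s, p) in requests if tp <= s <= t)
        if predict_future_cost(tp) >= gamma:
            if window_cost > lam * gamma:    # first condition above
                return True
        elif window_cost > gamma / lam:      # second condition above
            return True
    return False
```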
We found that PFSUM also consistently outperforms SRL in the experiments, as shown below.
- The following table presents the additional results for *Commuters* when $\beta = 0.8$, $T = 10$, $C = 100$, and the price distribution is Pareto.
| average cost ratio/perturbing probability | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SRL ($\lambda = 1$) | 1.027 | 1.027 | 1.027 | 1.027 | 1.027 | 1.027 | 1.027 | 1.027 | 1.027 | 1.027 | 1.027 |
| SRL ($\lambda = 0.6$) | 1.025 | 1.036 | 1.042 | 1.045 | 1.046 | 1.047 | 1.047 | 1.048 | 1.048 | 1.048 | 1.048 |
| SRL ($\lambda = 0.4$) | 1.026 | 1.043 | 1.052 | 1.056 | 1.058 | 1.059 | 1.060 | 1.060 | 1.060 | 1.060 | 1.060 |
| PFSUM | **1.014** | **1.021** | **1.024** | **1.025** | **1.027** | **1.027** | **1.027** | **1.027** | **1.027** | **1.027** | **1.027** |
- The following table presents the additional results for *Occasional Travelers* when $\beta = 0.8$, $T = 10$, $C = 100$, and the price distribution is Pareto.
| average cost ratio/perturbing probability | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SRL ($\lambda = 1$)| 1.023 | 1.023 | 1.023 | 1.023 | 1.023 | 1.023 | 1.023 | 1.023 | 1.023 | 1.023 | 1.023 |
| SRL ($\lambda = 0.6$)| 1.015 | 1.04 | 1.046 | 1.052 | 1.054 | 1.055 | 1.055 | 1.055 | 1.055 | 1.055 | 1.055 |
| SRL ($\lambda = 0.4$) | 1.013 | 1.07 | 1.088 | 1.104 | 1.110 | 1.114 | 1.115 | 1.115 | 1.115 | 1.115 | 1.115 |
| PFSUM | **1.010** | **1.017** | **1.022** | **1.022** | **1.010** | **1.017** | **1.022** | **1.022** | **1.010** | **1.017** | **1.022** |
### Concluding Remarks
We are not claiming that our work is perfect. But our work does address learning-augmented Bahncard from perspectives different than existing work (including reference [12] and *Online algorithms with costly predictions (2023)*). We also conduct a comprehensive experimental evaluation to compare with existing algorithms. We believe that our work is valuable to the community and can further stimulate new ideas & studies of the learning-augmented Bahncard problem. | Summary: In this paper, the authors investigate the Bahncard problem in the learning-augmented context. The Bahncard problem is a generalization of ski rental, originating from the railway pass of the German railway company of the same name, where an algorithm must choose between a cheap short-term solution (purchaing railway fare at full price) or an expensive long-term solution (purchasing a Bahncard to receive a discount on all future fares for a set period). Previous works have studied the Bahncard problem extensively in the non-learning-augmented setting, and in the learning-augmented setting by formulating the Bahncard problem as a linear program and applying the primal-dual framework for approximately solving LPs. The authors contribute by proposing an algorithm using the problem structure of the Bahncard problem, independent of the primal-dual framework, that can handle fractional and non-uniform inputs, and utilizes short-term predictions as opposed to an advice on the entire sequence of travels.
Formally, the authors consider problem instances where a series of travel requests arrive at time $t_i$ with ticket price $p_i$. An algorithm can either choose to buy the ticket at full price, or purchase a Bahncard with price $C$, that is valid for a time period of length $T$, and discounts all purchases while valid by a multiplicative factor of $0 \leq \beta \leq 1$. The authors' proposed algorithm, PFSUM, is $2/(1+\beta)$-consistent (the competitive ratio when the prediction is accurate) and $1/\beta$-robust (the competitive ratio upper bound with arbitrarily inaccurate prediction).
PFSUM is an intuitively simple algorithm that builds upon previous ideas. On a high level, the algorithm examines both the requests that arrived in the past $T$ time period, as well as the predicted requests that will arrive in the next $T$ time period. If the total cost of both the past and the future time period is larger than a certain parameter, the algorithm purchases a Bahncard. The proof strategy divides the algorithm's execution into phases, and bounds the ratio between PFSUM's solution cost and the optimal solution cost for each phase, depending on how the Bahncard purchases overlap between the two solutions.
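The buying rule described above can be sketched in code. This is a hypothetical illustration, not the authors' pseudocode: the function name and parameters are our own, and the threshold $\gamma = C/(1-\beta)$ is the natural break-even value suggested by the problem setup.

```python
def pfsum_should_buy(past_costs, predicted_costs, C, beta, card_valid):
    """Decide whether a PFSUM-style policy buys a Bahncard now.

    past_costs:      ticket prices of requests in the past T time units
    predicted_costs: predicted prices of requests in the next T time units
    C:               Bahncard price; beta: discount factor, 0 <= beta < 1
    card_valid:      whether a previously bought card is still valid
    """
    if card_valid:                 # never buy while a card is active
        return False
    gamma = C / (1.0 - beta)       # break-even threshold
    return sum(past_costs) + sum(predicted_costs) >= gamma
```

For example, with $C = 80$ and $\beta = 0.2$ the threshold is $\gamma = 100$, so past costs of 60 plus predicted costs of 50 trigger a purchase.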
Strengths: Overall the paper is very well-written and addresses a theoretically and practically interesting problem in the context of learning-augmented algorithms, improving on previous solutions. The results yield significant improvements over prior works on the same problem, by specializing and utilizing the problem structure, and the models (advice, performance) chosen by the authors are natural and reasonably practical. The algorithms and high-level ideas presented are simple and intuitive, and while the detailed proofs are quite involved, they are presented in a fashion that is easy to follow and understand, which I believe is an important characteristic of a good paper, especially for a widely attended conference such as NeurIPS.
In particular, I appreciate the authors' explanation of various initial attempts based on prior works in Section 4: I can easily follow the development of their algorithm from simple preliminary ideas inspired by existing literature, and understand why these naive algorithms do not provide satisfactory bounds, and what led the authors to make improvements and modifications to derive the eventual PFSUM algorithm. Apart from this, the theorems and technical claims are also introduced in sufficient context for me to understand what role each claim plays in the overall structure of proofs and logic. The various figures used by the authors to illustrate definitions and ideas are also much appreciated.
Weaknesses: I do not have many major complaints about the paper. One particular issue is that I am not certain how the conclusion of Proposition 4.6 on page 6 is trivial: while I agree the numerator is upper bounded by Lemma 4.3, I do not see an obvious lower bound on the cost of the optimal solution in the denominator, for which Lemma 4.3 only implies an upper bound. A quick delve into the appendices seems to provide traces of arguments supporting $OPT(*) \geq (1+\beta) \gamma + \beta \eta$, so I am inclined to trust the soundness of the authors' claims, but I believe that this is important enough an argument to be made explicit in the main corpus.
I am recommending a weak accept as an initial score, but can be convinced by clarifications from the authors to raise the score to accept.
Technical Quality: 3
Clarity: 4
Questions for Authors: As stated above, I would like to understand the logic behind Proposition 4.6 in more detail, which I believe is important yet non-trivial enough to be explicitly shown in the main corpus.
A few minor questions/suggestions/comments:
- Line 169-172, Section 4.3: The argument behind SUM's success might be more suitable in an earlier section, to help the readers understand both Section 4.1 and 4.2 better.
- Line 188: The definition of the prediction error only concerns the sum of the cost of requests. Is it possible to obtain better bounds and results using a more "fine-grained" error function? This might be an interesting future direction if feasible.
- Line 203, Lemma 4.4: The statement and the figure together slightly confuse me - what if the time interval $[t, t+T)$ does not intersect on phases on both sides? Intuitively the same result should hold with $s_2 = s_4 = 0$, but it is not made explicit by either the lemma statement or the figure.
- Line 221-222, text of Figure 2: "...$x$ can be any non-negative **integers**." It should be **integer**?
- Line 252, Proposition 4.8: The definitions seem to suggest that there should be no regular requests during an on phase, so what is the logic behind the proposition statement? Are the regular cost and the on phase referencing different algorithms?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss their limitations properly in the introduction and the conclusion sections. No ethical concerns are applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer AGsA, we deeply appreciate your acknowledgment and support of our work! Below, we respond to the weakness and questions 1-5, one by one.
### **Weakness**
We greatly appreciate your constructive comments. In particular, we greatly appreciate that you understood our paper in depth. We will rewrite the proof of Proposition 4.6 and hope the new version makes this proposition clearer for readers. The new proof is presented below.
*Proof.* Let $x = c(\sigma; [\tau_i, \tau_i + T))$. Based on the definition of Pattern II, $\textsf{OPT}(\sigma; [\tau_i, \tau_i + T)) = C + \beta x$ (OPT buys a card at $\tau_i$) and $\textsf{PFSUM}(\sigma; [\tau_i, \tau_i + T)) = x$ (by definition, PFSUM does not buy cards during any off phase). Hence, the cost ratio is $\frac{x}{C + \beta x}$, which increases with $x$ since $\beta < 1$. By Lemma 4.3, $x < 2\gamma + \eta$. Hence, the cost ratio is bounded by $\frac{2\gamma + \eta}{C + \beta (2\gamma + \eta)} = \frac{2\gamma + \eta}{(1-\beta)\gamma + \beta (2\gamma + \eta)} = \frac{2\gamma + \eta}{(1+\beta)\gamma + \beta \eta}$.
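The algebra in this proof can be sanity-checked numerically; the parameter values below are illustrative, with $\beta < 1$ and $C = (1-\beta)\gamma$ as in the paper.

```python
beta, gamma, eta = 0.3, 100.0, 40.0    # illustrative values, beta < 1
C = (1 - beta) * gamma                 # gamma = C / (1 - beta)

def cost_ratio(x):
    """Ratio x / (C + beta * x) from the proof of Proposition 4.6."""
    return x / (C + beta * x)

# the ratio increases with x, so x < 2*gamma + eta gives the worst case
assert cost_ratio(50) < cost_ratio(100) < cost_ratio(150)

# at x = 2*gamma + eta, the simplified closed form matches
x = 2 * gamma + eta
simplified = (2 * gamma + eta) / ((1 + beta) * gamma + beta * eta)
assert abs(cost_ratio(x) - simplified) < 1e-12
```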
### **Question 1**
Thank you for your extensive review. We will follow your suggestion to add more description to Section 4.1 to help readers understand.
### **Question 2**
We agree with you. Making more fine-grained predictions for future requests and defining the error function accordingly might help to improve the bounds, which is indeed an interesting direction for future research. Thanks for your valuable suggestion.
### **Question 3**
Your insight is reasonable. Actually, Lemma 4.3 describes the case you mentioned, in which the time interval $[t, t+T)$ is fully contained in an off phase and does not intersect on phases on either side. Therefore, we omit this case in the figure associated with Lemma 4.4.
### **Question 4**
Thanks for pointing this out. You are right; we made a typo here. $x$ should be an integer.
### **Question 5**
Thanks for alerting us to this ambiguous description. Firstly, yes, there will be no regular requests during an on phase. An on phase refers to a period during which PFSUM has a valid Bahncard. On the other hand, *the total regular cost* in an on phase refers to **the sum of the original ticket prices** before discounts in this phase (please refer to line 108). Therefore, the regular cost of a given interval is fixed and independent of the algorithm.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Yes, please include the proof (sketch) for Proposition 4.6 in the camera-ready version, if accepted. Personally, I am quite excited to see if there is potential in a more 'fine-grained' advice format, but obviously it is out of scope for this submission.
With Proposition 4.6 clarified I am raising my rating from 6 to 7.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We are very pleased to know that your concerns have been resolved. We also greatly appreciate your high regard for our work's contribution. We will include these discussions in the final paper and try our best to proofread all the details. Once again, we thank you for your constructive suggestions! | Summary: In this paper, the authors provide a learning-augmented approach for solving the Bahncard problem, which is a generalization of the ski-rental problem. The authors provide theoretical guarantees for the consistency and robustness of the proposed algorithm PFSUM, which measures the performance of the proposed method compared to an optimal offline method for different levels of prediction error in learning augmentation. The paper also provides empirical results of the proposed method and a comparison with existing methods, which validate the proposed method.
Strengths: * The paper proposes learning augmentation for the Bahncard problem with theoretical guarantees, which seems novel in the related literature.
* The proposed method is introduced in a methodical way which helps understand the motivation and intuition behind the method.
* The paper provides convincing theoretical and empirical results that validate the proposed method.
Weaknesses: * There seems to be a gap between the theoretical result for robustness and the empirical results in the paper. Specifically, when $\beta$ is very small (i.e., when the discount is larger), the bound for the competitive ratio tends to be very large. However, the empirical results show that the cost ratio is close to 1 (as shown in Figure 29, for example). Is there an intuitive reason for this gap?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the Weaknesses.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer m14s, we are immensely grateful for your positive comments! Below is our response to the weakness.
### **Weakness**
We greatly appreciate your keen observation of the gap between the experimental results and the theoretical robustness bound $1 / \beta$. Allow us to start by revisiting the following three definitions, although you might already fully understand them.
* **Cost ratio:** the ratio between the cost produced by an online algorithm and the cost of the offline optimum for a given input (i.e., travel requests).
* **Competitive ratio:** the **worst-case** cost ratio across **all possible** inputs, comparing an online algorithm with the offline optimum.
* **Robustness:** an upper bound of the competitive ratio across **all possible** prediction errors.
**Therefore, the theoretical bound $1 / \beta$ describes the algorithm’s performance in the worst-case scenario, typically arising from very extreme inputs, under the prediction error being $\infty$.** In contrast, our experiments generate travel requests from distributions that closely resemble real-world scenarios, which are unlikely to encounter such extreme situations. Hence, it is likely that the empirical results do not demonstrate the theoretical bound of robustness.
Furthermore, we want to explain that, by our experimental setup, **the prediction error $\eta$ is not $\infty$ even when the perturbation probability is $1.0$**. Note that the random noise added is sampled from the same distribution used for generating ticket prices. Taking Fig. 29 as an example, when $T = 10$ and the perturbation probability is $1.0$, the expected prediction error $\eta$ is $500$, which is the sum of 10 samples of the noise distribution with a mean of 50 (the same mean as the ticket price distribution). On the other hand, $\gamma = C / (1 - \beta) = 500$. Let's assume the actual prediction error sampled from the distribution is exactly its expectation, i.e., $\eta = 500$. Then, by equation (10), the competitive ratio for the case of perturbation probability = $1.0$ in Fig. 29 is $(4-\beta) / (1+2\beta) = 3.8/1.4 \approx 2.714$, much smaller than $1/\beta = 5$.
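The arithmetic above can be replayed directly; $\beta = 0.2$ is implied by $1/\beta = 5$, and the script simply evaluates the numbers quoted in this paragraph.

```python
beta = 0.2                      # implied by 1/beta = 5
eta = 10 * 50                   # 10 noise samples with mean 50 -> expected error 500
gamma = 500                     # gamma = C / (1 - beta)

# equation (10) evaluated at eta = gamma, as in the rebuttal text
ratio = (4 - beta) / (1 + 2 * beta)
assert abs(ratio - 3.8 / 1.4) < 1e-12   # approximately 2.714
assert ratio < 1 / beta                 # well below the robustness bound of 5
```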
In fact, the instance that leads to the theoretical worst-case scenario $1/\beta$ is an **arbitrarily dense** sequence of requests with small prices (the price of each request $\ll \gamma$). However, the experiments consider a discrete timeline (a widely used approach in numerical experiments), which makes it difficult to construct extreme instances. This is one of the reasons why we rarely observe the worst competitive ratio in the experiments.
---
Rebuttal 2:
Comment: Thank you for the clarification. In light of comments from other reviewers and your responses, I am inclined to keep my positive score.
---
Rebuttal Comment 2.1:
Comment: We are really excited to learn that you have recognized our responses. Thanks for your support for our work! | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
EGonc : Energy-based Open-Set Node Classification with substitute Unknowns | Accept (poster) | Summary: This paper introduces a new method EGonc for open-set node classification in graphs. EGonc performs open-set classification by incorporating energy-based models into the graph learning diagram. The training of the model only requires in-distribution data, while out-of-distribution training data are generated synthetically. In the experiments, the authors compare EGonc against a wide range of baseline methods on several benchmarks and provide ablation studies.
Strengths: 1. The paper focuses on open-set node classification in graph learning, which is an important problem in real-world applications.
2. Incorporating energy-based models into graph learning is a new method that has not been explored.
3. The proposed method shows good performance, beating a wide range of baselines as is shown in the experiments.
Weaknesses: 1. An important claim made in the paper is not supported. The paper claims that the proposed method "has nice theoretical properties that guarantee an overall distinguishable margin between the detection scores for IND and OOD samples", but there is no theoretical analysis supporting this claim.
2. The experiments are not sufficient. First, using one class as the out-of-distribution (OOD) class is very different from the real-world scenario where OOD examples can be quite diverse. Second, the paper does not compare against several important OOD detection baselines[1][2]. The proposed method has a negative impact on the in-distribution accuracy, while the methods in [1][2] will not. Third, the error bars are not reported. Fourth, an ablation study for energy propagation is missing, which seems to be an important component of the method.
3. The presentation needs major improvement. First, there are a lot of format issues (e.g., missing reference in line 127, text in the equations is not correctly formatted). Second, the intuitions behind $l_2$ and $l_3$ are not well-explained. It is not clear what the roles of the two losses are. It would be more helpful to use examples to show the intent of the two losses. Same for equations 8 and 9. Third, Proposition 1 uses vague descriptions, which is not valid for a mathematical statement.
[1] Liu, Weitang, et al. "Energy-based out-of-distribution detection." Advances in neural information processing systems 33 (2020): 21464-21475.
[2] Sun, Yiyou, et al. "Out-of-distribution detection with deep nearest neighbors." International Conference on Machine Learning. PMLR, 2022.
Technical Quality: 1
Clarity: 1
Questions for Authors: 1. What's the intuition behind the unknown substitute generation formula (equations 8 and 9)? What is the intuition of including $l_2$ and $l_3$ into the learning objective? Can you use examples to explain them?
2. In line 127, the second kind of substitute is generated from nodes with low classification confidence. Would that also be affected by the over-confident problem?
3. In equation 14, how the $\lambda$'s are selected?
4. I suggest moving line 155-160 (the introduction of GNN) to the background section.
Confidence: 4
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** An important claim made …
**A-W1:** We support the claim through three propositions, which are stated in Section 3 and proved in Appendix C. In Proposition 1, we prove that losses $l_1$ and $l_2$ result in lower energy scores for the model's outputs on training (i.e., IND) data, by taking the partial derivatives of these losses with respect to the model parameters $\theta$. In Proposition 2, we prove that the overall energy scores of known-class samples decrease during energy propagation, while the overall energy scores of substitute nodes increase, implicitly enlarging the decision boundary between IND and OOD samples. In Proposition 3, we prove that loss $l_3$, i.e., the energy regularization loss, reduces the energy scores of IND data and increases the energy scores of OOD data, having the same effect as $l_1$ and $l_2$. With these three propositions, the proposed method can guarantee an overall distinguishable margin between the detection scores for IND and OOD samples. Thank you. We will add more analysis and explanation in the revision to improve the clarity.
**W2:** The experiments…
**AW2-1:** We have conducted comprehensive experiments, including near open-set classification with one unknown class (Table 1 and 7) or multiple unknown classes (Table 6), and far open-set classification with multiple unknown classes (Table 3). We also discussed the experiments of different settings in terms of inductive learning (Table 1 and 6) and transductive learning (Table 7). In addition, we also conducted ablation study (Table 2), parameter analysis (Figure 1\&2), and backbone architectures (Table 8).
**AW2-2:** For the papers cited here ([1][2]), the main task is OOD detection, which is a different task from open-set node classification. The goal of OOD detection is to identify unknown-class samples, a binary classification problem, i.e., whether a test sample is an OOD sample or not. In contrast, open-set classification requires both identifying unknown-class samples and classifying the known-class samples. It is a multi-class classification problem, which is more difficult than OOD detection. Thus, OOD detection methods cannot be directly applied to open-set classification, and this is the main reason we did not include OOD detection methods as baselines. In addition, the proposed method does not reduce IND accuracy much. As shown in Table 5, averaged across five datasets, compared to the closed-world method GCN, the IND classification accuracy of our method decreases by only 3.2\% (from 79.5\% to 76.6\%), while the OOD accuracy improves from 0\% to 76.1\%.
**AW2-3**: Thanks. Many classic articles [3][4] in this task do not include error bars, and we followed their format. But we agree that error bars can provide more detailed information, and we will add them in the revision.
[3] M. Wu, et al. Openwgl: open-world graph learning for unseen class node classification. Knowledge and Information Systems, pages 1-26, 2021.
[4] L. Shu, et al. DOC: Deep open classification of text documents. Empirical Methods in Natural Language Processing, pages 2911-2916, 2017.
**AW2-4**: Thanks. We have conducted this ablation study, as shown in Table 2, i.e., the first two versions without loss $l_3$. Comparing the results of the $l_1$+$l_2$ and $l_1$+$l_2$+$l_3$ variants, we can see that the energy propagation module contributes to the improvement of model performance, which demonstrates its significance as an important component.
**W3:** The presentation…
**AW3-1:** We will address the format issues, and carefully proofread the whole article to improve the overall presentation.
**AW3-2:** The intuition of $l_2$ is related to data generation process. It ensures that the known class samples are sufficiently close to their own class center in the output space.
The intuition of $l_3$ is to control the energy values of known class samples being low, while keeping the energy values of substitute nodes being high, thereby allowing the energy model to remain in an optimal state where it can function effectively. Thus, the role of the loss function $l_3$ is to adjust the energy scores and keep the energy model in an optimal state.
The intuition of Eq. 8 is to generate substitute nodes at the boundary between two different known classes. For example, we can draw a line between a node from class A and a node from class B, then take a node near the center of this line as a constructed inter-class unknown substitute.
The intuition of Eq. 9 is to generate substitute nodes located at the periphery of known classes. The goal is to enhance the model's ability to recognize unknown classes located at the periphery. We achieve this by selecting class centers of known classes and peripheral nodes, with semantic and structural criteria, then mapping them to generate substitute nodes located at the periphery of known classes.
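The two generation intuitions can be sketched as follows. This is an illustrative mixup-style stand-in, not the paper's exact Eqs. 8 and 9; the function names and the scaling factor are our own assumptions.

```python
import numpy as np

def inter_class_substitute(z_a, z_b, lam=0.5):
    """Eq. 8 intuition (illustrative): a point near the midpoint of the
    line between a class-A embedding and a class-B embedding."""
    return lam * z_a + (1.0 - lam) * z_b

def peripheral_substitute(center, peripheral, alpha=1.2):
    """Eq. 9 intuition (illustrative): push a peripheral node slightly
    outward from its class center, just past the known class (alpha > 1)."""
    return center + alpha * (peripheral - center)
```

For example, embeddings [0, 0] and [2, 2] yield the inter-class substitute [1, 1], while a center [0, 0] and peripheral node [1, 0] yield the peripheral substitute [1.2, 0].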
**AW3-3:** We will revise it and make it more concise and consistent with a mathematical statement. Thanks.
**Q1:** What's the intuition behind…
**A1:** Please refer to **AW3-2** .
**Q2:** In line 127, the second kind of substitute…
**A2:** Thanks. Overconfidence has only a very slight effect on the generation of our second kind of substitute nodes, since we only need a portion of the unconfident nodes to help the generation, not all of them. Thus, there is enough slack to tolerate nodes that should be unconfident but are misclassified with overconfidence.
**Q3:** In equation 14…
**A3:** We used a grid search to determine the values of these two hyperparameters. The results are given in E.9 in the Appendix.
**Q4:** line 155-160…
**A4:** We will revise it. Thanks.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response to the concerns raised. I appreciate the effort you've put into addressing each point.
After reviewing your explanations, I will increase my score, as many of my concerns have been addressed. However, I hope the presentation can be improved further to enhance the clarity of your work.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your valuable comments and suggestions on our manuscript. We appreciate your expertise and careful consideration. And we will improve the presentation further and enhance the clarity of our work. Thanks again. | Summary: This paper proposed a new energy-based generative open-set node classification method to achieve open-set graph learning. It uses energy-based models to estimate the underlying density of the seen classes and to evaluate the probability of a given input belonging to the IND classes or OOD classes. Besides, it mimics the distribution of real open-set classes by generating substitute unknown samples under the guidance of graph structure. Besides, the proposed method has nice theoretical properties that guarantee an overall distinguishable margin between the detection scores for IND and OOD samples.
Strengths: 1) This paper studies a significant and interesting problem, and the method can be used in a wide range of real-world applications.
2) The paper is overall well motivated. The proposed model is reasonable and sound. Theoretical analysis is performed.
3) Comprehensive experiments, including near open-set classification, far open-set classification, ablation studies and parameter learning, are conducted. The results demonstrate good performance.
Weaknesses: 1) What is difference between open-set node classification and out-of-distribution detection problem? The authors should illustrate the differences and whether the proposed method can solve these two problems simultaneously.
2) In Section 3.1, it claims that incautiously selected unrelated data does not help open-set classification, and thus substitute unknown nodes near the class boundaries are generated. Why does randomly selected data not help? Isn't it real-world open-set data that can help open-set learning?
3) It is a good idea to imitate data from the open-set distribution by generating pseudo out-of-distribution samples; however, the diversity of the generated data is essential. The method used in the paper is manifold mixup; can this method ensure the diversity of the generated data?
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Review 3:**
**Weakness 1:** What is difference between open-set node classification and out-of-distribution detection problem? The authors should illustrate the differences and whether the proposed method can solve these two problems simultaneously.
**Answer W1:** Thanks for your suggestion. We will take this into account and emphasise it in the next version.
OOD detection aims to identify the unknown samples that the model did not see during training, which is a binary classification problem, i.e., whether a test sample is an OOD sample or not. Open-set classification requires both identifying unknown-class samples and classifying the known-class samples, i.e., it is a multi-class classification problem, which is more difficult than the OOD detection problem. Thus, OOD detection methods cannot be directly applied to open-set classification scenarios. We will add this content in the revision.
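To make the relationship concrete, the standard free-energy score of Liu et al. [2020] turns a $k$-way classifier into a $(k+1)$-way open-set classifier with a single threshold. This is a generic sketch for intuition, not EGonc's exact scoring function.

```python
import numpy as np

def open_set_predict(logits, threshold):
    """Predict a class in [0, k-1], or k for 'unknown'.

    Free energy E(x) = -log sum_j exp(logit_j) is low for IND inputs and
    high for OOD inputs; thresholding it adds an unknown class on top of
    the usual argmax, combining OOD detection with closed-set classification.
    """
    energy = -np.log(np.exp(logits).sum(axis=-1))
    pred = logits.argmax(axis=-1)
    k = logits.shape[-1]
    return np.where(energy > threshold, k, pred)
```

A confident in-distribution sample keeps its argmax label, while a uniformly low-logit sample is routed to the extra "unknown" class.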
**Weakness 2:** In Section 3.1, it claims that incautiously selected unrelated data does not help open-set classification, and thus substitute unknown nodes near the class boundaries are generated. Why does randomly selected data not help? Isn't it real-world open-set data that can help open-set learning?
**Answer W2:** Thanks for your question. Randomly selected, unrelated data typically does not contain feature information relevant to known classes, providing no boundary confidence to help the model establish classification boundaries. This effectively introduces noise into the open-set classification task, interfering with the model's learning process. In contrast, the unknown class nodes constructed near the class boundaries can provide rich boundary information for establishing classification boundaries, thereby helping the model to understand and recognise potential unknown class categories, improving the model's robustness and classification performance. Although these randomly selected unrelated data may belong to open-set data in the real world, they do not always contain information we need to support open-set learning. In particular, open-set learning relies more on samples that can provide boundary information to help the model determine decision boundaries and narrow the decision space.
**Weakness 3:** It is a good idea to imitate data from the open-set distribution by generating pseudo out-of-distribution samples; however, the diversity of the generated data is essential. The method used in the paper is manifold mixup; can this method ensure the diversity of the generated data?
**Answer W3:** Thanks for your question. Diversity of generated data is critical when creating pseudo-unknown class points. The construction of clear and unambiguous decision boundaries depends on the high quality and diversity of the generated data. Manifold Mixup is a method that performs data mixing in manifold space, aiming to generate new pseudo-samples by linearly mixing samples. While Manifold Mixup can increase data diversity, it also depends on the mixing strategy used and the characteristics of the data itself. In future work, we will consider the possibility of combining Manifold Mixup with other methods, such as data augmentation and adversarial sample generation, to further ensure the diversity of the generated data. | Summary: This paper focuses on energy-based open-set node classification, and adopted a generative method to obtain the explicit specific score of a node belonging to the ‘unknown‘ class. To generate good substitute unknowns, it adopted energy-based models to estimate the density of classes and guarantee the nice theoretical properties of the proposed method with an overall distinguishable margin between the detection scores from IND and OOD samples. Extensive experiments are conducted and show superior performance.
Strengths: (1) The paper is well-structured and overall easy to follow.
(2) The proposed method can obtain an explicit specific score of belonging to OOD data for each input, which provides more information and shows the model's confidence in its decision.
(3) The proposed method has nice theoretical properties that guarantee an overall distinguishable margin between the detection scores for IND and OOD samples; the adopted energy regularization loss has effects consistent with both the cross-entropy loss and the tailored complement entropy loss on the known classes, and the losses are not mutually exclusive.
(4) The proposed method is agnostic to the specific GNN architecture, which demonstrates its generality to some extent.
Weaknesses: (1) In the main part of the paper, some important experiment settings are missing, which makes the readers feel confused about the motivations of the experiments.
(2) What are the differences between near open-set classification and far open-set classification? Why do they matter? If a method can achieve good far open-set classification, it should also be good at near open-set classification?
(3) For Section 3, the model design is quite complex; the illustration should be more straightforward and give each term and symbol enough explanation.
(4) Some sentences should be improved to make them clearer, such as line 271-273: “Inspired by the Elastic Network Zou and Hastie [2005], Friedman et al. [2010], and considering the similarity in form and function between regularization terms and corresponding error terms…”
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Review 2:**
**Weakness 1:** In the main part of the paper, some important experiment settings are missing, which makes the readers feel confused about the motivations of the experiments.
**Answer W1:** Thanks for your comment. A more detailed experimental setting can be found in **subsection E.4** of the **Appendix**. Due to space limitations, we simplified and shortened this part in the main text. We will move the important details from the Appendix into the main text in the revision.
**Weakness 2:** What are the differences between near open-set classification and far open-set classification? why do they matter? If a method can achieve good far open-set classification, it should also be good at near open-set classification?
**Answer W2:** Thanks for your question. Near open-set classification identifies fine-grained OOD samples while also classifying the known-class samples; far open-set classification identifies coarse-grained OOD samples while also classifying the known-class samples. In near open-set classification, near-OOD data may be similar to some of the known classes, so the model must discern between these easily confused categories and learn very clear, specific boundaries for the known classes. In comparison, far-OOD data normally differ significantly from the known classes. The distinction matters because recognizing categories that differ significantly from the training data is essential to model robustness. To some extent, we think near open-set classification is more challenging since it requires learning very precise class boundaries to distinguish fine-grained differences between categories.
**Weakness 3:** For Section 3, the model design is quite complex; the illustration should be more straightforward, and each term and each symbol should be given enough explanation.
**Answer W3:** Thanks for your suggestion. We will improve the illustration of **Section 3** and make each term and symbol explained more clearly.
**Weakness 4:** Some sentences should be improved to make them clearer, such as lines 271-273: "Inspired by the Elastic Network Zou and Hastie [2005], Friedman et al. [2010], and considering the similarity in form and function between regularization terms and corresponding error terms..."
**Answer W4:** Thanks for your suggestion. We will revise these sentences and improve the whole paper in terms of writing.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, which solves most of my concerns, and I will maintain my score on this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your valuable comments and suggestions on our manuscript. We appreciate your expertise and careful consideration. We will improve the manuscript further as you suggested. Thanks again. | Summary: In this paper, the authors propose a new generative open set node classification method (EGonc) to address the challenge of Open Set Classification (OSC) for safely deploying machine learning models in an open world. Traditional softmax-based methods are overly confident on unseen data, making them vulnerable to out-of-distribution (OOD) samples. EGonc uses graph structure to generate substitute unknowns simulating real open set samples and employs Energy-Based Models (EBM) for density estimation. The method learns additional energy logits from feature residuals to indicate OOD-ness, ensuring a distinguishable margin between in-distribution (IND) and OOD detection scores. Experimental results demonstrate the superiority of EGonc.
Strengths: 1. Novel Approach: The paper introduces a new method (EGonc) for open set classification that leverages energy-based models, which is a significant departure from traditional softmax-based methods.
2. Theoretical Guarantees: EGonc comes with strong theoretical properties that ensure a distinguishable margin between in-distribution and out-of-distribution samples.
3. Robustness to OOD Variability: By simulating real open set samples and learning virtual OOD classes, the method enhances robustness against the diversity of OOD examples.
Weaknesses: 1. The motivation primarily focuses on the limitations of softmax-based neural networks without delving deeply into the broader implications and potential impact of improved open set classification across various domains and real-world applications.
2. The advantages of the proposed method could be strengthened by providing more concrete examples of practical scenarios where current OSC methods fail and how EGonc specifically addresses these failures, making a stronger case for its real-world applicability and importance.
3. While the method shows promise, its applicability and performance in domains outside of those tested in the experiments need further validation.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Why were these datasets chosen, and what advantages do they offer in validating the proposed method?
2. Why was the 2017 GCN by Kipf and Welling chosen as the backbone neural network for experimental evaluation, instead of using other models?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Review 1:**
**Weakness 1:** The motivation primarily focuses on the limitations of softmax-based neural networks without delving deeply into the broader implications and potential impact of improved open set classification across various domains and real-world applications.
**Answer W1:** Thanks for your comment. A majority of neural-network-based classification models use softmax to perform classification, which is why we mainly focus on softmax-based neural networks. As shown in the experiments, across various fields and practical applications, our method improves the adaptability and generalization of models in open-set classification scenarios compared to softmax-based neural networks designed for closed-world classification, thereby broadening their application scope. In addition, in **Table 1** of the main text, we also include sigmoid-based neural networks (GCN\_sig and GCN\_sig\_$\tau$) as baselines.
We will add this analysis to the revision.
**Weakness 2:** The advantages of the proposed method could be strengthened by providing more concrete examples of practical scenarios where current OSC methods fail and how EGonc specifically addresses these failures, making a stronger case for its real-world applicability and importance.
**Answer W2:** Thank you for your suggestion. We will specifically present the failure cases of EGonc, $\mathcal{G}^2pxy$, and GNNSAFE across the five datasets in the revision, and we will visualise their open set node classification results in the Appendix, making the differences between the three models more intuitive.
**Weakness 3 \& Question 1:** While the method shows promise, its applicability and performance in domains outside of those tested in the experiments need further validation. Why were those datasets chosen, and what advantages do they offer in validating the proposed method?
**Answer W3 \& Q1:** Thanks for your comment and your question. In the field of open-set node classification on graphs, these datasets are the most commonly used. For example, [1] conducted experiments using Cora, Citeseer, Dblp, and Pubmed; our experiments mainly followed their setup and additionally included the Ogbn\_arxiv dataset to evaluate the model's performance on large-scale data. Specifically, these datasets have the following advantages: First, these datasets (Cora, Citeseer, Dblp, Pubmed, and Ogbn\_arxiv) are widely used in the academic community, are publicly available and accessible, and have a high degree of representativeness and ubiquity, which allows for a fair and equitable measurement of model performance and enhances the transparency and credibility of research. Second, the datasets come from a variety of sources, including computer science (Citeseer, Dblp), medicine (Pubmed), and arXiv preprints (Ogbn\_arxiv), which allows us to validate the applicability and robustness of the proposed methodology in different domains. Third, they span various sizes, including small (Cora, Citeseer), medium (Dblp, Pubmed), and large (Ogbn\_arxiv) datasets, which allows us to evaluate model performance at different scales.
[1]: Man Wu, Shirui Pan, and Xingquan Zhu. OpenWGL: open-world graph learning for unseen class node classification. Knowledge and Information Systems, pages 1-26, 2021.
**Question 2:** Why was the 2017 GCN by Kipf and Welling chosen as the backbone neural network for experimental evaluation, instead of using other models?
**Answer Q2:** Thanks for your question. Actually, the proposed method, EGonc, is agnostic to the specific GNN architecture and demonstrates robust generalization capabilities. We conducted experiments on different GNN architectures, including GCN, GAT, and GraphSAGE. The results, shown in **Table 8** in **Appendix E.10**, confirm the effectiveness and generalization ability of EGonc for open-set node classification.
Considering the space limitation, we only report the experimental results for GCN-based EGonc in the body of the article, since GCN is one of the most widely used and representative GNNs.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I will keep my positive rating.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your valuable comments and suggestions on our paper. We appreciate your expertise and careful consideration. If we have addressed your major concerns, we would kindly ask you to consider increasing your score. We will improve the manuscript further as you suggested. Thanks so much.
Global Rewards in Restless Multi-Armed Bandits | Accept (poster) | Summary: The authors study restless multi-armed bandits where we do not observe separate rewards for individual arms and instead observe a single global reward aggregated across arms. They propose Linear- and Shapley-Whittle indices, which extend classical Whittle indices (designed for settings where separate rewards for individual arms are observed) to the global reward setting, and establish approximation bounds. The proposed Linear-Whittle policy computes the marginal reward for each arm, and the proposed Shapley-Whittle policy computes Shapley values for each arm. To address nonlinear reward functions, the authors propose adaptive policies that take into account inter-arm interactions. They combine MCTS with Whittle indices to look beyond greedy selection.
Strengths: The authors prove the difficulty of finding solutions to RMABs with global reward through reduction from RAMB and reduction from submodular optimization. They establish performance bounds for index-based policies and include discussions on the takeaways of the bounds.
A comprehensive set of policies and baselines are compared in experimental evaluations. Results on both synthetic and real-world data showcase the strength of proposed approaches. The real-world setting nicely motivates the global reward RMAB problem.
Weaknesses: For approximation bounds in section 4.2, the authors briefly describe proof techniques and implications of the results. It would strengthen the paper to elaborate more on technical novelties / contributions in the proofs.
The adaptive policies are stronger than the pre-computed index-based policies, which exhibit poor performance due to their inability to account for inter-arm interactions. However, the theoretical results focus only on the Linear- and Shapley-Whittle indices.
Technical Quality: 4
Clarity: 4
Questions for Authors: Could the theoretical results be modified and applied to adaptive approaches, or have implications in adaptive approaches? If so, it would be nice to have a discussion in the paper.
The authors focus on scenarios where the global reward is monotonic and submodular in actions. Such rewards have diminishing returns as extra arms are pulled. Could the authors discuss potential ways to apply/modify proposed approaches to settings where the global reward is not monotonic and submodular in actions? This is mentioned in limitations section, but it could improve the paper to at least point out potential ways to apply proposed methods in more general settings.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors discussed the limitations and impact of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer a47J,
We thank you for your comments and suggestions. These comments will greatly help improve our final manuscript. We appreciate that you find our empirical evaluation comprehensive, and find that our empirical results strengthen the validity of our policies. We are additionally glad that our real-world applications help motivate the RMAB-G problem. We provide answers to each of your questions below:
### Novelties + Contributions in Approximation Bounds
We thank the reviewers for inquiring about our proof techniques and the novelties present there. Our key technical insight is a novel method to bound the performance of the Linear- and Shapley-Whittle policies using similarities between the global rewards and the rewards of an induced RMAB. Such a technique provides insight into how approximation guarantees can be derived for other variants of RMABs. To quantify the similarity between the rewards of Linear- and Shapley-Whittle and the rewards of the induced RMAB, we use properties of submodular functions to upper bound the reward from Linear- and Shapley-Whittle. We plan to emphasize these contributions more in the camera-ready version of our paper.
### Non-Monotonic Rewards
We thank the reviewers for bringing up this interesting question. Following the reviewer's suggestion, we empirically tested our methods with non-monotonic rewards. In the supplemental PDF, we attach a Figure that evaluates the performance of our policies under two non-monotonic rewards. The first reward is the minimum function, where we assume that the minimum of an empty set is 0, and the second is the linear reward, but with negative rewards for some arms. We compare the performance of our policies for both 4 and 10 arms, and evaluate this across 15 seeds.
MCTS Shapley-Whittle performs best for the negative linear reward, which follows the same trend seen for monotonic rewards. Such results follow intuition, as for non-monotonic rewards, the impact of one arm upon the rewards of other arms is more complex, requiring the use of adaptive algorithms. For the minimum function, we find that MCTS-based methods perform best for N=10, while for N=4, we find that our index-based policies perform near-optimally. We believe that a more systematic investigation of non-monotonic rewards poses an interesting direction for future research, and thank the reviewer for bringing this up.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. The authors nicely summarized the novelties and contribution in approximation bounds, which improves the presentation. The authors also pointed to empirical results in the appendix across various settings under monotonic rewards, which improves the contribution. I have increased my confidence in my assessment of this work. | Summary: This paper studies RMAB (restless multi-armed bandits) with global rewards. Standard RMAB assumes that the total reward is a sum of the rewards across arms. Even though this type of RMAB incurs the curse of dimensionality, the linear reward structure allows a Whittle index to be defined. In contrast, in this paper the total reward has an additional term that is submodular but non-linear. The authors first propose two linear approximations of the global reward, through which the Whittle policy can then be used. Approximation bounds for the resulting Linear-Whittle and Shapley-Whittle policies are provided. Since these approximation bounds can be poor, the authors further provide an iterative linear approximation (the iterative Linear-Whittle policy), although the latter does not have performance guarantees.
Strengths: 1. RMAB with global reward has not been studied in the literature.
2. Some approximation guarantees are provided for the Linear-Whittle and Shapley-Whittle policies.
Weaknesses: 1. The provable approximation guarantees can be loose (as low as $1/K$, where $K$ is the number of arms), unless the non-linear term is insignificant.
2. The iterative Linear-Whittle policy, which is meant to achieve better performance, does not have performance guarantees.
3. The presentation of the iterative Linear-Whittle policy could be improved.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What is the significance of Theorem 3? The reviewer is confused because only ensuring that $\hat{a}$ attains a certain fraction of the optimal $Q$-function does not seem to imply that the long-term reward is guaranteed to be lower bounded.
Also, it seems that in this theorem the argument of $P_i$ for the state is either 0 or 1. Why are there only two states?
2. In Example 4.1, how do you know that the approximation ratio is 1/2 by only examining one $a$ and $s$? Shouldn't you take the minimum over all $a$ and $s$?
3. Definition 4 (Iterative Linear-Whittle) is a bit confusing. The iterations were never introduced. It seems that $X$ is the decision from the previous iteration; is that right? It would be better to state the algorithm as pseudocode with iterations.
4. Similarly, Section 5.3 is difficult to follow. It is unclear to the reader where/how/why Monte Carlo Tree Search is combined with the iterative algorithm in Definition 4.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are discussed in Section 8.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer t2ns,
We thank you for your suggestions and comments on our paper, and these will greatly help improve our final manuscript. We are glad that you appreciate how RMAB-G has not been previously studied in the literature. Additionally, we are happy that you appreciate the approximation guarantees we provide for our Linear- and Shapley-Whittle policies. We provide answers to each of your questions below:
### Looseness of the 1/K Guarantee
We thank the reviewers for bringing this up. While the 1/K guarantee can be loose in some situations, we demonstrate in Lemma 8 that there exist reward functions for which the 1/K guarantee is tight. To provide a tighter guarantee, in Theorems 4 and 5, we provide lower bounds for the performance of Linear- and Shapley-Whittle dependent on the value of $\beta\_{\mathrm{linear}}$ and $\beta\_{\mathrm{shapley}}$ respectively.
### Iterative Linear-Whittle Guarantee
We thank you for bringing up this question, as it is an interesting and natural question. Proof techniques used to bound the performance of the Linear- and Shapley-Whittle policies cannot be directly applied to adaptive methods because adaptive methods lack an analog for the "induced linear RMAB" or "induced Shapley RMAB". To get around this limitation, we develop new performance bounds for the iterative Linear-Whittle policy and show that a) in some scenarios, the iterative Linear-Whittle policy achieves no better than $\frac{1}{1+(K-1)\gamma}$ of the optimal average reward, and b) for any choice of reward and transitions, the iterative Linear-Whittle policy achieves at least $\frac{1}{K}$ of the optimal reward. We introduce formal theorem statements and proof sketches in the global rebuttal.
### Theorem 3 Significance
We thank the reviewers for bringing up the purpose of Theorem 3. We agree with the reviewer that Theorem 3 does not guarantee the long-term reward is bounded. Theorem 3 demonstrates that it is possible to achieve a 1-1/e approximation to the Q value, which serves as an initial attempt at solving the problem. However, in addition to the algorithm not guaranteeing a long-term bound on the reward, such a method is computationally intractable, as it requires exponential time to learn such a Q function. Theorem 3 motivates the need for computationally feasible algorithms with long-term reward guarantees, as initial reinforcement learning-based solutions fail to solve the problem. We plan to clarify this further in the camera-ready version of our manuscript.
### Example 4.1 Approximation Ratio
In Example 4.1, we show that our Linear-Whittle policy achieves a 1/2 approximation. We demonstrate this by first computing an upper bound on the optimal reward; we do so by upper bounding the individual contribution from each arm ($p\_{i}(s\_{i})a\_{i}$), then maximizing this by summing over arms ($\sum\_{i=1}^{N} p\_{i}(s\_{i})a\_{i}$). This allows us to find that $\mathrm{OPT} \leq 6$. We then compute the arms which are pulled by the Linear-Whittle policy, and find that the reward for this is 3. Therefore, the Linear-Whittle policy achieves at least 1/2 of the optimal reward. We plan to add more details to our analysis in Example 4.1, and include these details in the paper.
### Explaining Iterative Linear-Whittle
We thank the reviewers for pointing this out, and the reviewer is correct that $X$ are the decisions from previous iterations. To improve clarity, we plan on including pseudocode in the camera-ready version of the manuscript.
### Explaining MCTS with iterative algorithms
We include details on our MCTS algorithm in Algorithm 1 of the Appendix, but plan to incorporate this into the main paper and include further details. At its core, our MCTS algorithm computes a variant of the Linear-Whittle index for different actions. Each node in MCTS represents a partial selection of arms, and children nodes indicate new possible arms to be selected. During a rollout, we select additional arms to pull for the current timestep, and after this, we have a candidate set of arms to pull corresponding to an action $\mathbf{a}$. We can compute the exact global and local reward using this and use index-based approaches to estimate the future reward for the action $\mathbf{a}$. We combine these two values to assess a total reward associated with the action $\mathbf{a}$, and repeat this procedure across different combinations of arms.
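The evaluation step described above can be sketched as follows. This is a hypothetical illustration of the scoring idea only: the function names, the exhaustive search standing in for the tree search, and the discounting are our own simplifying assumptions, not the authors' implementation (Algorithm 1 in their Appendix).

```python
import itertools

def score_action(arms, state, global_reward, whittle_index, gamma=0.9):
    """Exact immediate global reward plus a discounted, index-based future estimate."""
    immediate = global_reward(arms, state)
    future = gamma * sum(whittle_index[i] for i in arms)
    return immediate + future

def best_action(n_arms, budget, state, global_reward, whittle_index):
    """Exhaustive stand-in for the tree search: score every size-`budget` action."""
    candidates = itertools.combinations(range(n_arms), budget)
    return max(candidates,
               key=lambda a: score_action(a, state, global_reward, whittle_index))
```

With a linear global reward and zero index estimates, `best_action` reduces to greedily picking the arms with the highest immediate contributions; MCTS improves on this by exploring combinations whose joint reward differs from the sum of individual contributions.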
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I think I will keep my current score. Regarding Example 4.1, your analysis seems to be based on a=[1,1,0,0] and s=[1,1,1,1]. By (4), shouldn't you consider all a and s, not just this pair?
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for your comment! To answer your question, for brevity we displayed the minimizing pair of a and s (which are a=[1,1,0,0] and s=[1,1,1,1]), but you are correct that all pairs of a and s should be considered to compute $\beta$. We plan to update our manuscript to further specify and clarify this.
To address this NP-hard problem, authors modify Whittle index policy for linear cases, and propose two adaptive policies for nonlinear cases.
Theoretical results (competitive ratio) are provided for modified index policy, and experiments show the performance of proposed algorithms.
Strengths: This work opens a new topic that hasn’t been explored in RMAB, where there is a global utility function.
The authors provide theoretical bounds for linear and shapley whittle index policies.
The experiment part explores different combinations of policies and reward functions. Both synthetic and real traces are simulated to support the superior of proposed algorithms.
Weaknesses: 1. The major weakness I’m thinking of is the model itself. I might need more input from the authors to clarify why RMAB-G is required as a general model, instead of some specific scenario. The authors start with food rescue problems, that the platform wants to maximize the probability of one rescue task completion, while maintaining high engaging levels.
This model makes sense to me, but when I'm trying to understand other examples, such as peer review, I cannot find a good explanation for the local arm reward. The state of an arm (reviewer) here is whether they are available, so what would the local arm reward be? Maximizing the availability of the reviewers?
The authors have mentioned several other applications for RMAB-G such as blood donation, emergency dispatch, could you clarify how these applications model global and local rewards?
Can the authors provide a general requirement for the models that need RMAB-G?
2. I’m also having a hard time understanding some of the global reward settings in section 6.1. More specifically the necessity of global reward function.
For example, for the linear reward function, if we set the individual reward for each arm as the very naïve m_i s_i a_i + s_i, maximizing this reward for each arm individually seems to generate the largest global reward? The same holds for the max global reward, where we can just set the local reward as m_i s_i a_i.
3. Another question that I have is that MCTS requires local searching, which is a relatively complicated algorithm. Given known transition probabilities, the Whittle index can be computed (or approximated), and then the rest of the algorithm is just looking up the index tables. I understand that the proposed algorithms have better reward performance, but what is the time/complexity sacrifice?
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to weakness for my major concerns.
Some minor questions:
1. Line 84, Shouldn’t it be R(s,a) = R_glob plus \sum R_i instead of times?
2. Line 103, To clarify, Whittle index policy is optimal in asymptotic regime.
3. For experiments, how many time slots (food rescue tasks) are deployed to measure the average reward? Also, have you duplicated arms for the experiments, or can the proposed algorithms achieve very close to optimal reward even with 4 distinct arms?
I'm asking because, based on my previous experience with RMABs, the Whittle index may perform somewhat far from optimal if there is only one copy of each type of arm. This question also relates to the previous point that index policies are optimal in the asymptotic regime.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: My scores are based on the concern of model itself, as well as the necessity of global reward function in some circumstances. If my statement is wrong or inaccurate, I’m willing to adjust the scores accordingly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer oZCz,
We thank you for your suggestions and these comments will help improve our final manuscript. We appreciate that you view our work as “opening a new topic” and are glad that you recognize our theoretical bounds. Additionally, we are happy that you appreciated our empirical results and found that they supported our proposed algorithms. We provide answers to each of your questions below:
### RMAB-G Model in Peer Review
We thank the authors for bringing up their questions about applying RMAB-G to the peer review setting. If peer review institutions focus exclusively on review quality, then the local reward could be 0. Such a scenario is allowed within our model and all performance guarantees still apply to such a situation. We empirically analyze situations with zero local reward in Appendix I.
### Global and Local Rewards in Blood Donation and Emergency Dispatch
The volunteer emergency dispatch situation parallels the food rescue scenario for both the global and local reward. In volunteer emergency dispatch, volunteers are notified about emergencies so that they can assist (an example of this can be seen through the app PulsePoint). The global reward is the probability that any volunteer helps out with an emergency, which corresponds to the probability reward. The local reward corresponds to engagement; similar to food rescue, emergency dispatch platforms want to ensure high engagement rates from their volunteers.
In blood donation, the objective is to match blood donors to blood donation opportunities [1]. Local rewards in this situation correspond to the number of active blood donors. Global rewards could refer to the total amount of blood donated weighted by some fairness constraint proportional to the squared difference between rural and non-rural hospitals [1]. Alternatively, there could be other domain-specific considerations that play into the choice of local and global rewards.
### General Model Requirements
To apply our proposed algorithms, we require two things of the reward function: a) it is submodular in the arms pulled, meaning that the marginal gain for pulling an arm is diminished as additional arms are pulled simultaneously, and b) it is monotonically increasing in the arms pulled (pulling more arms cannot decrease the reward). Submodular monotonic reward functions are common assumptions for set functions and arise naturally in many situations including explainability [2], reinforcement learning [3], and economics [4].
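As a concrete illustration of these two requirements (our own toy check, not taken from the paper), the probability-style reward f(S) = 1 - prod_{i in S}(1 - p_i) discussed for food rescue is both monotone and submodular, which can be verified by brute force on a small instance with hypothetical per-arm probabilities:

```python
from itertools import combinations

# Brute-force check that f(S) = 1 - prod_{i in S} (1 - p_i) is monotone
# (adding an arm never decreases reward) and submodular (marginal gains
# diminish on supersets). The probabilities below are hypothetical.

p = [0.2, 0.5, 0.7, 0.9]

def f(S):
    prod = 1.0
    for i in S:
        prod *= 1.0 - p[i]
    return 1.0 - prod

arms = range(len(p))
subsets = [frozenset(c) for r in range(len(p) + 1) for c in combinations(arms, r)]

for S in subsets:
    for j in arms:
        if j in S:
            continue
        gain_S = f(S | {j}) - f(S)
        assert gain_S >= -1e-12  # monotone
        for T in subsets:
            if S <= T and j not in T:
                assert f(T | {j}) - f(T) <= gain_S + 1e-12  # submodular
print("f is monotone and submodular on this instance")
```

The marginal gain of adding arm j given S is p_j * prod_{i in S}(1 - p_i), which can only shrink as S grows, so the check passes for any choice of probabilities in [0, 1].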
### Global Rewards for Maximization
We thank the reviewer for this question, and plan to include further details on the maximization reward in the paper. We agree with the reviewer that for the linear global reward, naively applying the proposed solution is optimal. However, this is not true for the maximization global reward.
For example, suppose that we have three arms ($N=3$) with a budget of two ($K=2$). Suppose that the global reward is the maximization reward, while the local rewards correspond to whether arms are in state 1: $R\_{i}(s\_{i},a\_{i}) = s\_{i}$. Suppose that $m\_{1} = 5$, while $m\_{2} = 4$ and $m\_{3} = 2$. Finally, suppose that arms 1 and 2 remain in state $s\_{1}=1, s\_{2}=1$, while arm 3 is in state 1 if pulled ($P\_{3}(1,1,1) = 1$). In this scenario, naively maximizing $m\_{i} s\_{i} a\_{i} + s\_{i}$ results in pulling arms 1 and 2, while the optimal set of arms to pull is 1 and 3. Neither pulling arm 2 nor arm 3 provides any benefit to the global reward when arm 1 is already pulled. However, pulling arm 3 improves the local reward more than pulling arm 2, so pulling arms 1 and 3 is optimal. More generally, because the maximum function is non-linear and non-separable, approximating the reward purely linearly can result in poor performance due to overestimation. We demonstrate this empirically in Figure 2, where our Linear-Whittle policy, which estimates the reward in a similar manner, performs worse than our Shapley-Whittle policy, which uses Shapley values to estimate the reward function.
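A small numeric sketch of this counterexample (with the one-step transition timing simplified for illustration; the helper names are ours, not the paper's):

```python
from itertools import combinations

m = {1: 5, 2: 4, 3: 2}  # reward weights from the example

def next_state(action):
    # Arms 1 and 2 remain in state 1; arm 3 reaches state 1 only if pulled.
    return {1: 1, 2: 1, 3: 1 if 3 in action else 0}

def total_reward(action):
    s = next_state(action)
    glob = max((m[i] for i in action if s[i] == 1), default=0)  # maximization reward
    local = sum(s.values())  # local rewards R_i(s_i, a_i) = s_i
    return glob + local

naive = (1, 2)  # arms maximizing m_i * s_i * a_i + s_i individually
best = max(combinations(m, 2), key=total_reward)
print(total_reward(naive), best, total_reward(best))  # 7 (1, 3) 8
```

Pulling {1, 3} beats the naive choice {1, 2}: once arm 1 is pulled, arm 2 contributes nothing to the max-based global reward, while pulling arm 3 raises the local reward.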
### Time Tradeoff for MCTS
We agree with the reviewer that the time complexity of MCTS is an important consideration when deciding which algorithm to use. We agree that MCTS could run slowly due to the effect of local searching; we quantify this in Figure 3, where we plot how the time needed to run MCTS varies with the problem size. In practice, when N<=100, the MCTS-based algorithms can run in under 30 seconds, which is suitable for real-world applications such as food rescue (which require that algorithms run in under 1 minute).
### Experimental details for food rescue
We thank the reviewer for this question. We provide details on the number of time slots in Appendix A, and plan to bring this forward into the main paper. For all experiments, we run 50 food rescue trips, averaged across 15 seeds with 5 trials per seed. We select T=50 because, with the discounted reward, $\gamma^{50} < 0.005$, so the rewards beyond T=50 are relatively small.
### Smaller comments
We thank the reviewers for their writing suggestions, and we plan to change the typo on line 84 and correct line 103 so it specifies optimality in the asymptotic regime.
### References
[1] McElfresh, Duncan C., et al. "Matching algorithms for blood donation." Proceedings of the 21st ACM Conference on Economics and Computation. 2020.
[2] Chen, Ruoyu, et al. "Less is more: Fewer interpretable region via submodular subset selection." arXiv preprint arXiv:2402.09164 (2024).
[3] Prajapat, Manish, et al. "Submodular reinforcement learning." arXiv preprint arXiv:2307.13372 (2023).
[4] Chateauneuf, Alain, and Bernard Cornet. "Submodular financial markets with frictions." Economic Theory 73.2 (2022): 721-744. | Summary: This paper studies the popular Restless Multi-Armed Bandit (RMAB) problem and aims to tackle a key limitation – which is that, in RMABs, the rewards are assumed to be separable across arms. This is a limitation because in many scenarios, the overall reward may not simply be a sum over individual rewards of arms, but these rewards may be tied inextricably with each other.
To tackle this issue, the paper proposed the RMAB-G framework with non-separable global rewards. The paper shows hardness results on the RMAB-G problem. Further, the paper proposes index-based policies, similar to the original "Whittle index" policy for regular RMABs – the two indexes proposed are the "Linear-Whittle" and the "Shapley-Whittle" indexes.
The paper also proves approximation bounds on the performance of these indexes and carries out empirical studies based on synthetic and real-world data to demonstrate the good performance.
Strengths: 1. I believe a key strength of the paper is proposing the RMAB-G model with a notion of non-separable global rewards. While RMABs have been extensively studied before, this formulation (and its solution) is novel to the best of my knowledge.
2. The proposed linear and Shapley whittle indexes intuitively make sense. I quite like the concept and augmentations made to the original whittle index to solve the RMAB-G problem.
3. The results presented in the paper are grounded in theory – the paper provides approximation bounds for the proposed solutions. It also proves other minor theoretical results such as hardness of the RMAB-G.
4. Empirical results look good: there are experimental results on both synthetic as well as real-world data.
Weaknesses: 1. Motivation: While the new RMAB-G framework and the provided analysis are interesting from a technical pov, the motivation for this setup / application to food rescue seems a little tortured. For instance, in lines 66-67, the paper says that global rewards are needed because we cannot split the reward into per-volunteer functions. However, I’m not sure why — isn’t the total reward simply the sum of probabilities of individuals carrying out their assigned rescue tasks?
2. Scalability: My worry is that the proposed solutions may not scale well. The experiments are all run on a small number of arms. Technically, the bottleneck might come from computing the Shapley index itself. It seems like an expensive step with a min over an exponential number of state vectors.
3. Both the proposed index policies seem to depend on the budget K. Contrary to the regular whittle index, which is budget-agnostic, this seems a little less clean (and also perhaps a hurdle?). Is it possible that if the budget changes from K to K+1, the selected arms may change drastically because all the index values of all arms changed?
4. Some concepts are not introduced in the paper. For instance, indexability, whittle index, etc. are assumed to be common knowledge, potentially making the paper difficult to access for someone not familiar with these concepts.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. In line 54, why is whittle index defined as the min w where Q(., 0) > Q(., 1)? As far as I know whittle index is defined as the point where the two Q values become equal?
2. Are the assumptions on the reward functions (monotonic, submodular, etc.) true for the food rescue setting or the blood donation example?
3. Line 86: Is the RHS missing a +?
4. In Theorem 3, what does it mean for the function g(s) to be submodular in s? Why is this a reasonable assumption?
5. In the expression for u(s_i) in line 117, what do factorial terms stand for?
6. In computing the Shapley index, how stable is the index wrt K? For example, if K were to increase by 1, is it possible that the top-K arms are significantly different from the top-(K+1) arms because all the indices changed?
7. In line 123: why is it true that “this approach could lead to more accurate estimates”?
8. In Theorem 4, in the expression for \beta_{linear}, how is the R(s,a) term determined?
9. In line 230, what is the difference between a trial and a seed?
10. In the first paragraph the paper cites maternal health [6], but this paper seems to have nothing to do with maternal health. Perhaps the authors wanted to cite the following:
Aditya Mate, Lovish Madaan, Aparna Taneja, Neha Madhiwalla, Shresth Verma, Gargi Singh, Aparna Hegde, Pradeep Varakantham and Milind Tambe.
“Field Study in Deploying Restless Multi-Armed Bandits: Assisting Non-Profits in Improving Maternal and Child Health”
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer TFrk,
We thank you for your suggestions and insights, and your comments will help improve our final manuscript. We are happy that you find our study novel, and find the RMAB-G model to be a key strength of our paper. Additionally, we are glad that you find our policies intuitive and grounded in theory. Finally, we appreciate that you think our empirical results are good, both for synthetic and real-world experiments. We provide answers to each of your questions below:
### Motivation for Food Rescue Scenario
We thank the reviewer for bringing up this important question. We note that the probability of any volunteer matching to a rescue trip is not simply the sum of their match probabilities. For example, if there are 2 volunteers, each with a match probability of 1/2, then the probability of any volunteer matching is 3/4, rather than 1/2+1/2 = 1. This is because we compute the probability that no volunteer successfully matches to a trip, which is a nonlinear function of the individual match probabilities.
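This nonlinearity is easy to see in code. The sketch below (an illustrative simplification, not the paper's implementation) computes the probability that at least one notified volunteer matches, reproducing the 3/4 example above:

```python
def prob_any_match(probs):
    """P(at least one volunteer matches) = 1 - prod_i (1 - p_i).
    Nonlinear in the individual match probabilities."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Two volunteers with match probability 1/2 each: 3/4, not 1/2 + 1/2 = 1.
assert abs(prob_any_match([0.5, 0.5]) - 0.75) < 1e-12
```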
### Scalability of Method
We agree with the reviewer that scalability is important for assessing the real-world applicability of our methods. In Appendix I and Appendix J, we vary the number of arms between 25 and 1000, and compare the performance of our policies. Index-based policies run quickly because the indices can be pre-computed, and Shapley values can also be computed quickly because we estimate them using a subset of arm combinations. While adaptive policies, such as MCTS, run slower than index-based policies, when N <= 50, both adaptive and index-based policies can be computed in under 15 seconds, which is fast enough for real-world use cases such as food rescue. For large N, we can use index-based methods, which run in under a second per timestep for N=1000.
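The subsampling of arm combinations mentioned above can be sketched as a standard Monte-Carlo Shapley estimate over random arm orderings (a generic illustration under a toy submodular reward, not the authors' exact code):

```python
import random

def shapley_estimate(reward, n_arms, n_samples=200, seed=0):
    """Estimate each arm's Shapley value by averaging its marginal
    contribution over randomly sampled arm orderings."""
    rng = random.Random(seed)
    values = [0.0] * n_arms
    for _ in range(n_samples):
        order = list(range(n_arms))
        rng.shuffle(order)
        coalition = set()
        base = reward(coalition)
        for arm in order:
            coalition.add(arm)
            new = reward(coalition)
            values[arm] += new - base
            base = new
    return [v / n_samples for v in values]

# Toy submodular reward: probability that any arm in the coalition matches.
probs = [0.5, 0.5, 0.2]
def reward(coalition):
    p_none = 1.0
    for i in coalition:
        p_none *= (1.0 - probs[i])
    return 1.0 - p_none

est = shapley_estimate(reward, 3)
# Efficiency: per-ordering marginals telescope, so estimates sum to the total reward.
assert abs(sum(est) - reward({0, 1, 2})) < 1e-9
```

Because each sampled ordering only requires one pass over the arms, the cost is linear in the number of arms per sample rather than exponential in coalitions.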
### Impact of Budget on Arms Selected
We thank the reviewer for bringing up an interesting question about how the budget impacts the actions chosen by our index-based policies. For the Linear-Whittle policy, all arms that are selected with a budget of $K$ are selected for a budget of $K+1$ as well (within a particular timestep), because $p\_{i}(s\_{i})$ is independent of the budget. However, for the Shapley-Whittle policy, the impact of the budget on the arms selected is more complicated because $u\_{i}(s\_{i})$ is not independent of the budget. We do not believe that the Shapley-Whittle indices should change drastically when increasing the budget because the Shapley values are computed across many combinations of arms. Intuitively, increasing the budget should change the size of each arm combination, but the average reward of the arm combinations should not change drastically.
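The nestedness claim for the Linear-Whittle policy can be sanity-checked directly: if each arm's index does not depend on K, the top-K set is always contained in the top-(K+1) set. A minimal illustration with made-up index values (not the paper's code):

```python
def top_k(indices, k):
    """Arms ranked by a budget-independent index; ties broken by arm id."""
    order = sorted(range(len(indices)), key=lambda i: (-indices[i], i))
    return set(order[:k])

whittle_like = [0.7, 0.2, 0.9, 0.4, 0.4]
for k in range(len(whittle_like)):
    # Every arm pulled under budget k is still pulled under budget k + 1.
    assert top_k(whittle_like, k) <= top_k(whittle_like, k + 1)
```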
### Introducing Indexability and Whittle Index
We agree with the reviewer that indexability and the Whittle index are important concepts that can help readers better understand our work. In the camera-ready version of our work, we plan to include additional background on these concepts to make our paper more accessible.
### Assumptions for Food Rescue and Blood Donation
We thank the reviewer for bringing up an interesting question about our reward functions in food rescue and blood donation. As mentioned in Section 3, we model the global reward in food rescue using the probability reward function, which is both monotonic and submodular. A similar reward function can be used for the blood donation setting, as notifying donors about donation opportunities can only result in more blood being donated. Additionally, notifying additional donors leads to diminishing returns, due to capacity restrictions on the total amount of blood donated.
### Submodular g function
Theorem 3 assumes $g(\mathbf{s})$ is submodular in $\mathbf{s}$. $g(\mathbf{s})$ represents the maximum reward attainable from state $\mathbf{s}$. If $g(\mathbf{s})$ is submodular, then additional arms in the 1 state leads to diminishing marginal returns. Such an assumption holds true across each of the reward functions used in our experiments, and is commonly seen when additional present arms result in diminishing returns. We plan to include a brief discussion of the meaning of the assumptions for Theorem 3.
### Better estimates from Shapley-Whittle
We thank the reviewer for pointing this out. On line 123, we state that the Shapley-Whittle policy could lead to better estimates of the reward when compared to the Linear-Whittle policy. The Shapley-Whittle policy uses Shapley values to estimate the marginal contribution of each arm, while the Linear-Whittle policy overestimates marginal contributions. As a result, in many cases, the Shapley-Whittle index provides a better estimate of the reward because marginal contributions are averaged across many combinations of arms. This intuition is backed up by empirical evidence, where Shapley-based policies perform better than Linear-based policies. We plan to update our write-up to make this point clearer.
### Other Questions
We thank the reviewer for pointing out our typo on Line 86, and we additionally plan to change Line 54 to Q(.,0) = Q(.,1).
On line 117, the factorial terms arise from the original definition of Shapley values; these terms account for the number of orderings of arms for a particular action combination.
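Concretely, the classical formula weights each coalition $S$ by $\frac{|S|!\,(n-|S|-1)!}{n!}$, the fraction of orderings in which exactly the arms in $S$ precede arm $i$. A direct (exponential-time, small-$n$) illustration of that definition, not the paper's implementation:

```python
from itertools import combinations
from math import factorial

def exact_shapley(value, n):
    """Exact Shapley values; the factorial weight counts the orderings
    in which coalition S arrives before arm i."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Symmetric toy game: reward 1 as soon as any arm is pulled.
phi = exact_shapley(lambda S: float(bool(S)), 2)
assert phi == [0.5, 0.5]
```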
For Theorem 4, $R(\mathbf{s},\mathbf{a})$ term can be computed for any given $\mathbf{s}$ and $\mathbf{a}$. $\beta\_{\mathrm{linear}}$ can then be computed by minimizing this ratio across all choices for the action and state.
When we vary the trial, the starting state, $\mathbf{s}^{(0)}$, changes, while the transition matrices, $\mathcal{P}$, and reward parameters remain constant. However, when we change the seed, the starting state, transition matrices, and reward parameters all change.
We thank the reviewer for pointing out our misplaced citation, and we will fix this in the camera-ready. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments and for taking the time to carefully read through our work. We are pleased that reviewers find our problem formulation novel (reviewers TFrk18, oZCz07, and t2ns05) and motivated by real-world applications (reviewer a47J02). We are happy to see that reviewers find our proposed policies intuitive (reviewer TFrk18) with good empirical (reviewers TFrk18, oZCz07, and a47J02) and theoretical (reviewers TFrk18, oZCz07, t2ns05, and a47J02) backing. We additionally appreciate that reviewers find our set of empirical evaluations comprehensive (reviewer a47J02). Your comments have greatly helped improve our work.
We hope to address your comments and questions in this rebuttal. We first describe a set of new approximation bounds for adaptive methods, which was discussed by multiple reviewers, while we reply individually to each reviewer for any comments they might have.
# Summary of New Additions
## Approximation Bounds for Adaptive Methods
Following feedback from reviewers a47J and t2ns, we develop performance guarantees for adaptive methods. Proof techniques used to bound the performance of the Linear- and Shapley-Whittle policies cannot be directly applied to adaptive methods because adaptive methods lack an analog for the "induced linear RMAB" or "induced Shapley RMAB".
To get around this limitation, we develop new performance bounds for the iterative Linear-Whittle policy and show that a) in some scenarios, the iterative Linear-Whittle policy achieves no better than $\frac{1}{1+(K-1)\gamma}$ of the optimal average reward, and b) for any choice of reward and transitions, the iterative Linear-Whittle policy achieves at least $\frac{1}{K}$ of the optimal reward. We formally state these theorems and provide proof sketches below.
Theorem 1: Let $\pi\_{\mathrm{IL}}$ be the iterative linear policy. Let $\mathrm{ALG} = \mathbb{E}\_{(\mathbf{s},\mathbf{a})\sim (P,\pi\_{\mathrm{IL}})}[\sum\_{t=0}^{\infty} \gamma^{t} R(\mathbf{s},\mathbf{a})]$ and $\mathrm{OPT} = \max\limits\_{\pi} \mathbb{E}\_{(\mathbf{s},\mathbf{a})\sim (P,\pi)}[\sum\_{t=0}^{\infty} \gamma^{t} R(\mathbf{s},\mathbf{a})]$. Then there exists a reward function, $R(\mathbf{s},\mathbf{a})$, and transition probabilities, $\mathcal{P}$, so $\mathrm{ALG} = \mathrm{OPT} \frac{1}{1+(K-1) \gamma}$.
Theorem 2: For any fixed set of transitions, $\mathcal{P}$, let $\mathrm{OPT} = \max\limits\_{\pi} \mathbb{E}\_{(\mathbf{s},\mathbf{a})\sim (P,\pi)}[\frac{1}{T} \sum_{t=0}^{T-1} R(\mathbf{s},\mathbf{a})]$ for some $T$.
For an \abr{rmab-g} $(\mathcal{S},\mathcal{A},R\_{i},R\_{\mathrm{glob}},P_{i},\gamma)$, let $R'\_{i}(s\_{i},a\_{i}) = R\_{i}(s\_{i},a\_{i}) + p\_{i}(s_{i}) a\_{i}$, and let the induced linear \abr{rmab} be $(\mathcal{S},\mathcal{A},R'\_{i},P\_{i},\gamma)$.
Let $\pi\_{\mathrm{IL}}$ be the iterative Linear-Whittle policy, and let $\mathrm{ALG} = \mathbb{E}\_{(\mathbf{s},\mathbf{a})\sim (P,\pi\_{\mathrm{IL}})}[\frac{1}{T} \sum\_{t=0}^{T-1} R(\mathbf{s},\mathbf{a})]$.
For any policy, $\pi$, let $\pi\_{i,t}$ be the augmented policy that pulls arms according to $\pi$ and additionally pulls arm $i$ at timestep $t$.
If $\mathbb{E}\_{(\mathbf{s},\mathbf{a})\sim (P,\pi\_{i,t})}[\frac{1}{T} \sum\_{t=0}^{T-1} R(\mathbf{s},\mathbf{a})] \geq \mathbb{E}\_{(\mathbf{s},\mathbf{a})\sim (P,\pi)}[\frac{1}{T} \sum\_{t=0}^{T-1} R(\mathbf{s},\mathbf{a})]$ and the induced linear \abr{rmab} is irreducible and indexable with the uniform global attractor property, then $\mathrm{ALG} \geq \frac{1}{K} \mathrm{OPT}$ asymptotically in $N$ for any set of transitions, $\mathcal{P}$.
Proof sketch of Theorem 1: To demonstrate an upper bound on the performance of the iterative Linear-Whittle policy, we construct a reward function with $2K-1$ arms. The optimal policy always pulls arms $1$ and $K+1,\ldots,2K-1$, while the iterative Linear-Whittle policy pulls arms $1,\ldots,K$. We construct the reward function in such a way that the reward of pulling arms $1,\ldots,K$ is $1$ for each timestep, while pulling arms $1$ and $K+1,\ldots,2K-1$ results in a reward of $1 + (K-1) \gamma$ per timestep.
Proof sketch of Theorem 2: To demonstrate a lower bound on the performance of the iterative Linear-Whittle policy, we first demonstrate that the iterative Linear-Whittle policy with budget $K$ does no worse than the Linear-Whittle policy with budget $1$. We then show that the Linear-Whittle policy with budget $1$ is at least a $\frac{1}{K}$ of the optimal reward. Taken together, this implies that the iterative Linear-Whittle policy achieves at least $\frac{1}{K}$ of the optimal reward.
Pdf: /pdf/08738c827160b8d90aaa473bfad43476ccb3b0b2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Inductive Bias of Stacking Towards Improving Reasoning | Accept (poster) | Summary: This paper examines the inductive bias of gradually stacking layers to increase the depth of a smaller model. The proposed stacking variant, MIDAS, enhances training efficiency and discovers a compelling inductive bias that boosts downstream performance, particularly in reasoning tasks.
Strengths: 1) This work is well-motivated in its aim to design a strategy for reducing the computational cost of training large models. Like some previous studies, it proposes a straightforward variant of gradual stacking that results in improved training speed and performance on reasoning tasks.
2) They established a link between the surprisingly discovered inductive bias that enhances reasoning performance and the looped transformer, which is specifically designed for such tasks. This work paves the way for further research into understanding the intriguing inductive bias of stacking layers to grow model size, not only for computational gain but also for performance improvement.
3) The paper is well written.
Weaknesses: The benchmarked models of size 1B and 2B are small compared to commonly used models like 7B, 13B etc. It’s hard to conclude yet whether this strategy will scale well. So some scaling study will show whether this work can be really a stronger candidate to replace the baselines.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) What is the reason to choose UL2 objective for training with 60% causal LM, 20% prefix LM and 20% span corruption? Do you see similar phenomena by just training with causal LM? Authors are requested to provide some ablation study and discussion about this.
2) From Table 1 we see memorization performance of MIDAS drops compared to baseline. Does the authors have any intuition why this is happening?
3) What is the performance of GRADSTACK for 2B parameters model in Table 1?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: This work shows limited improvement on memorization-based tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and many suggestions. We tried to address them below by running new experiments.
**Q1**: *The benchmarked models of size 1B and 2B are small compared to commonly used models like 7B, 13B etc. It’s hard to conclude yet whether this strategy will scale well.*
A: Based on the suggestion, we pretrained an 8B model on 100B tokens (instead of 400B tokens in the interest of time) for baseline and MIDAS. We try three learning rates to observe the effect of hyperparameters and report the results below. We include the evaluations corresponding to the categories in Table 1. We also include a new category for story completion tasks (average of LAMBADA, Story Cloze, Hellaswag) that also require reasoning from context, inspired by other reviewer suggestions. The average column now refers to an average of 18 tasks instead of 15. Overall the trends are very similar to 1B and 2B models: at a 1.26x speedup MIDAS has a much better task average. Interestingly, MIDAS seems a lot more robust to the choice of learning rate. We will include these 8B experiments and a longer run in the revision.
| Model | Loss (val) | Closed book QA | Open book QA | Math Word Problems | Story Completion | Task Average (18) |
|--|--|--|--|--|--|--|
| Baseline (LR=1e-2) | 1.962 | 13.7 | 35.1 | 27.7 | 49.7 | 30.3 |
| MIDAS (1.26x) (LR=1e-2) | 1.917 | 16.1 | 39.1 | 36.0 | 54.2 | 35.5 |
|--|--|--|--|--|--|--|
| Baseline (LR=3e-3) | 1.911 | 16.5 | 37.2 | 30.8 | 53.2 | 33.1 |
| MIDAS (1.26x) (LR=3e-3) | 1.905 | 18.2 | 39.2 | 30.9 | 54.7 | 34.3 |
|--|--|--|--|--|--|--|
| Baseline (LR=1e-3) | 1.898 | 17.9 | 38.6 | 27.6 | 51.8 | 32.5 |
| MIDAS (1.26x) (LR=1e-3) | 1.909 | 17.3 | 36.3 | 33.0 | 55.8 | 34.2 |
---
**Q2**: *What is the reason to choose UL2 objective for training with 60% causal LM, 20% prefix LM and 20% span corruption? Do you see similar phenomena by just training with causal LM?*
A: The observations that MIDAS is better than gradual stacking, and the inductive bias of MIDAS towards improved reasoning, also hold for GPT-style causal language modeling. We started with the UL2 objective and stuck with it, partly because it also uses causal LM for 60% of the data. Based on the reviewer's suggestion, we ran causal LM training for the 1B model and reported the results below. We include the evaluations corresponding to the categories in Table 1 for Causal LM training of the 1B model and also the new categories described in the response to **Q1**. The trends are very similar to the UL2 models.
| Model | Loss (val) | Closed book QA | Open book QA | Math Word Problems | Story Completion | Task Average (18) | Primitives |
|--|--|--|--|--|--|--|--|
| 1B LM Baseline | 2.43 | 12.3 | 33.9 | 30.3 | 42.8 | 30.4 | 51.6 |
| 1B LM GradStack (1.33x) | 2.43 | 10.3 | 32.1 | 22.3 | 42.3 | 26.5 | 39.9 |
| 1B LM MIDAS (1.33x) | 2.44 | 11.4 | 36.6 | 29.1 | 50.2 | 31.3 | 58.7 |
---
**Q3**: *memorization performance of MIDAS drops compared to baseline. Does the authors have any intuition why this is happening?*
A: This is an interesting open question and we attempted an initial analysis in Section 4.2, where we notice a clear inductive bias of MIDAS towards improving open book QA problems (which is closer to reasoning) more than closed book versions of the same questions (which is closer to memorization). Our speculative hypothesis is that due to the connection to looped models, MIDAS has a slightly lower “effective” number of parameters, which is known to correlate with memorization abilities [1]. This seems to be more than compensated for by the improved reasoning abilities of MIDAS. We believe this phenomenon deserves further exploration and understanding.
---
**Q4**: *What is the performance of GRADSTACK for 2B parameters model in Table 1?*
A: Below we report results for GradStack with Prop-2 schedule for the 2B model, including the new task categories.
| Model | Loss (val) | Closed book QA | Open book QA | Math Word Problems | Story Completion | Task Average (18) | Primitives |
|--|--|--|--|--|--|--|--|
| 2B Baseline | 1.926 | 15.2 | 39.1 | 27.1 | 54.4 | 32.4 | 54.4 |
| 2B GradStack (1.24x) | 1.945 | 14.2 | 37.0 | 24.5 | 51.5 | 30.2 | 64.2 |
| 2B MIDAS (1.24x) | 1.929 | 15.7 | 40.2 | 38.3 | 53.6 | 36.3 | 78.3 |
[1] Allen-Zhu, Li. Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws. 2024
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for answering my questions and validating them with further experiments. I think this is an interesting work which would drive further research, so I am keeping my score. | Summary: The authors propose MIDAS, an efficient and effective framework for gradually increasing model depth. Their method achieves better performance on some reasoning primitive problems. Also, the authors further provides empirical analysis to support their findings.
Strengths: 1. The authors propose MIDAS, a novel variant of gradual stacking, which achieves better training efficiency that baselines.
2. Their experiments show that MIDAS significantly outperforms baselines on certain reasoning primitive problems.
Weaknesses: Can the authors evaluate on more reasoning and common-sense questions for better comparison, such as tasks tested in the Llama paper?
Minor:
In line 105, “For simplicity, we L is divisible by k” should be “For simplicity, L is divisible by k.”
Technical Quality: 4
Clarity: 4
Questions for Authors: Will the authors release the code and dataset to allow the community to further study the interesting phenomenon mentioned in section 5?
Can this phenomenon be verified with fewer resources?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors need to add a separate section for limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and suggestions. We tried to address them below by running new experiments.
**Q1**: *Can the authors evaluate on more reasoning and common-sense questions for better comparison, such as tasks tested in the Llama paper?*
A: Based on the reviewer’s suggestion, we run evaluations on common sense reasoning benchmarks PiQA, HellaSwag, Winogrande, ARC-E, ARC-C from the Llama paper.
| Model | PiQA | HellaSwag | Winogrande | ARC-E | ARC-C | Average (5) |
|--|--|--|--|--|--|--|
| 1B Baseline | 75.4 | 58.7 | 59.8 | 63.2 | 31.7 | 57.8 |
| 1B MIDAS Prop-2 (1.33x) | 74.0 | 58.9 | 58.9 | 63.1 | 31.4 | 57.3 |
| 1B MIDAS Prop-3 (1.25x) | 74.0 | 60.4 | 59.0 | 62.5 | 32.7 | 57.7 |
|--|--|--|--|--|--|--|
| 2B Baseline | 75.6 | 65.0 | 62.3 | 66.9 | 35.2 | 61.0 |
| 2B MIDAS Prop-2 (1.24x) | 76.0 | 65.5 | 62.8 | 67.0 | 34.5 | 61.1 |
We did not include BoolQ because the evals are very noisy and close to “trivial” accuracy. Furthermore the trends are highly sensitive to which metric is used (accuracy vs auc-pr).
Overall, we find that MIDAS is roughly neutral relative to baseline at ~25% speedup. We do not observe a strong inductive bias for these tasks. Our hypothesis is that common-sense questions require a significant component of “memorization” of world knowledge about what is and what is not plausible in the physical world.
---
**Q2**: *Will the authors release the code and dataset to allow the community to further study the interesting phenomenon mentioned in section 5?*
A: We will release the reasoning primitives dataset and code to aid future research on this topic.
---
**Q3**: *Can this phenomenon be verified with fewer resources?*
A: The observation that MIDAS (middle stacking) is better than gradual stacking also holds at the 120M parameter scale, with decoder-only models like GPT-2 small and also with masked LM with BERT-Base. We did not report these results in the paper since the focus was on larger scale models, but we can include these results in the revision. The inductive bias towards reasoning and non-trivial performance on reasoning primitives may require closer to a billion parameters to see non-trivial few-shot performance.
We will fix the typos pointed out by the reviewer and will include more discussion about limitations, particularly regarding the understanding of the inductive bias.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal!
Comment: Thank you for the rebuttal! I find the paper interesting, although it is still in its early stages. I keep my score.
---
Rebuttal 2:
Title: Response
Comment: Thank you for acknowledging our rebuttal and your continued interest! One point we forgot to highlight in our earlier response was the following: at **1.25x speed up**, MIDAS not only improves reasoning primitives, but also **significantly improves standard benchmarks** like **open book QA** tasks (includes TydiQA, SquadV2, DROP, QuAC, CoQA), **story completion** (includes Lambada, StoryCloze, HellaSwag), **math word problem datasets** (ASDiv, MAWPS, SVAMP) and GSM8k finetuning, and is roughly neutral on closed book QA and commonsense tasks (based on results in Table 1). These speedup and quality improvements are also **verified at 1B, 2B and 8B parameter** scales. Our reasoning primitives were designed specifically to isolate the factors leading to reasoning benefits, and so the improvements there are even larger.
Hopefully this convinces the reviewer that the results are fairly mature, not early-stage.
Strengths: 1. This paper proposed a novel stacking algorithm for improving reasoning.
2. Experiments show that the algorithm improves performance on four distinct types of reasoning benchmarks.
3. The Deep dive into reasoning section shows very interesting insights on reasoning.
Weaknesses: The authors showed empirical evidence that the proposed algorithm improves reasoning, but reduces memorization. However, the experiments are done on four distinct categories of reasoning benchmarks. I think at least one more reasoning benchmark is needed in each category in order to justify that the improvement difference is indeed due to memorization instead of some dataset artifacts.
Technical Quality: 3
Clarity: 4
Questions for Authors: Does it make sense to test on existing synthetic reasoning benchmarks instead of just synthetic reasoning primitives? For example, ProofWriter: https://arxiv.org/abs/2012.13048.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes, the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and suggestions.
**Q1**: *I think at least one more reasoning benchmark is needed in each category in order to justify that the improvement difference is indeed due to memorization instead of some dataset artifacts.*
A: Based on the reviewer’s suggestion, we evaluated MIDAS and baseline models on another category of tasks called “story completion”, which includes tasks like LAMBADA, StoryCloze and Hellaswag. These measure the model's ability to correctly complete a story or premise. These tasks are also closer to reasoning, since the completion needs to be figured out from the given premise. The results for the 1B UL2 MIDAS models are presented below. Please refer to the response to Reviewer s9u7 for more evaluations on the story completion category.
| Model | Lambada | StoryCloze | HellaSwag |
|--|--|--|--|
| 1B Baseline | 16.1 | 73.1 | 58.7 |
| 1B MIDAS-Prop2 (1.33x) | 18.4 | 74.6 | 58.9 |
| 1B MIDAS-Prop3 (1.25x) | 25.5 | 74.9 | 60.4 |
**Q2**: *Does it make sense to test on existing synthetic reasoning benchmarks instead of just synthetic reasoning primitives? For example, ProofWriter*
A: Thank you for the suggestion. One of the motivations behind new reasoning primitives was to evaluate these models on very simple and basic tasks that can isolate the benefits of MIDAS, which we could not do successfully with existing benchmarks. That said, it also makes sense to test on more complex synthetic tasks like ProofWriter. Unfortunately, we were not able to set up this evaluation task in time for the response since we prioritized some other experiments, but we will try to include this in the revision. | Summary: The paper proposes an improvement of the gradual stacking method proposed in Reddi et al. 2023 for efficient training. The improved method relies on an observation that stacking the layers at the end exhibits the similarity between layers at the end and this might be a suboptimal choice but stacking the layers in the middle layers can exhibit the similarity between the layers in the middle layers (similar to looped models). Stacking the middle layers improves the performance on reasoning tasks such as open book QA and math word problem tasks. The paper also shows that even though the validation log perplexity of baseline and gradual stacking methods are similar to the proposed method, the proposed method achieves better performance on the downstream reasoning tasks. At the end, the paper proposes some simple tasks such as induction copying, variable assignment, and pre-school math tasks that might be responsible for improvement on reasoning tasks.
Strengths: - The reasoning behind stacking in the middle rather than at the end is simple and seems effective in experiments.
- It is interesting that the paper tried to characterize the improvement of stacking in the middle by characterizing its connection to the looped transformers and showing the benefits in the reasoning-related tasks. The paper also provides extensive experiments to support their claims in this regard.
- The paper is well-written and easy to follow overall.
Weaknesses: - In section 5, the paper conjectures that induction copying, variable assignment, and pre-school math tasks are some of the core capabilities of contextual reasoning that can help characterize the inductive bias of the proposed approach. The proposed method MIDAS achieves a significant improvement in these tasks. However, the improvement of MIDAS on downstream tasks (open book QA and math word problems) is marginal compared to the improvements on these tasks. This discrepancy raises the question of whether there are synthetic tasks for which MIDAS is not performing as well as the baseline, which is affecting the performance on downstream tasks.
- There are some typos in the paper:
- This on line 103?
- Line 105?
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses some of the limitations in reasonable detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and insightful questions.
**Q1**: *the improvement of MIDAS on downstream tasks (open book QA and math word problems) is marginal compared to the improvements on these tasks [reasoning primitives].*
A: While the improvement on these benchmarks is lower compared to the improvement on primitives, the absolute magnitude of improvement is still quite high. For instance, for the 1B model, MIDAS with the Prop-2 schedule improves open book QA from 33.3 -> 36.3 (+9% relative) and math word problems from 23.5 -> 29.0 (+23% relative). The reviewer is correct in observing that improvements on reasoning primitives, from 35.2 -> 58.4 (+65% relative), are even larger. Our hypothesis is the following: solving each benchmark dataset requires some combination of different skills, such as memorization and reasoning, in varying proportions. Based on the analysis in Section 4.2, we observed that MIDAS helps a lot more on tasks that require more reasoning (open book QA) than on the memorization version of the same QA tasks. Reasoning primitives were designed to specifically isolate reasoning skills and minimize the need for memorization; thus, we believe that MIDAS improves the most on primitives. Tasks like open book QA do require some reasoning from context; however, they can also benefit a lot from memorization of facts and world knowledge, where MIDAS does not improve over the baseline. (For instance, the model may choose to ignore the context and answer from memory, since after all it has decent closed book QA accuracy.) This, we believe, can dampen the magnitude of improvements. Math word problems intuitively require more reasoning relative to memorization than open book QA, and thus improvements are larger. We believe that understanding this interplay between reasoning and memorization is a very interesting and important future direction.
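The relative-improvement figures quoted above can be reproduced with simple truncating arithmetic; this is just a quick consistency check on the quoted numbers, not code from the paper:

```python
def rel_improvement(base, new):
    """Relative improvement in percent, truncated toward zero
    as in the figures quoted in the rebuttal."""
    return int(100 * (new - base) / base)

print(rel_improvement(33.3, 36.3))  # 9   (open book QA)
print(rel_improvement(23.5, 29.0))  # 23  (math word problems)
print(rel_improvement(35.2, 58.4))  # 65  (reasoning primitives)
```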
**Q2**: *... raises the question if there are synthetic tasks for which MIDAS is not performing as well as the baseline*
A: That is a good question. Since “memorization” is the slice where MIDAS has the least improvement (slight regression in some cases), we would need to construct a synthetic dataset that tests for memorization abilities. Even on closed book QA benchmarks, MIDAS is almost neutral in various cases, so finding a task where MIDAS is substantially worse would be an interesting direction. We could not come up with a very good synthetic task to isolate this effect, and if the reviewer has some suggestions, we are happy to try those.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal! It is good that similar results also hold when the model is trained with the Causal LM objective. I am keeping my current score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback and appreciation of the results in the paper. We have responded to reviewer questions individually. Following reviewer suggestions, we ran and report the following new experiments:
- Ran GradStack for the 2B UL2 model, which was missing from Table 1. MIDAS continues to be much better than GradStack (see Q4 of reviewer s9u7)
- Pretrained the 1B model with the causal LM objective. Trends are very similar to those with the UL2 pretraining objective (see Q2 of reviewer s9u7)
- Pretrained an 8B model with 100B tokens to verify that the results scale with model size. MIDAS @ 1.26x speedup has better downstream evals than baseline training and is more robust to the choice of learning rate (see Q1 of reviewer s9u7)
- Added new evaluations on another category of tasks, "story completion", which includes Lambada, StoryCloze, and HellaSwag. The inductive bias of MIDAS towards reasoning shows up here too (see Q1 of reviewer 1oxX)
- Added more evaluations on common-sense tasks from the Llama paper (PiQA, HellaSwag, WinoGrande, ARC-E, ARC-C). MIDAS @ 1.25x speedup is neutral on average compared to baseline training (see Q1 of reviewer 48NW)
We hope this addresses any remaining concerns by the reviewers, and we are happy to engage in more discussions. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |